With today's advancements in AI, we often see media reports of superhuman performance on some task. These often quite dramatic announcements should, however, be treated with a dose of skepticism, as many of them may result purely from pathologies in the measures applied to the problem. In this post I'd like to show what I mean by a "measurement pathology". To that end, I constructed a simple example, which will hopefully get the point across.
Example: measuring lemons
Imagine somebody came to your machine learning lab/company with the following problem: identify lemons in a photo. This problem sounds clear enough, but in order to build an actual machine learning system that accomplishes such a task, we have to formalize what it means in the form of a measure (of performance). The way this typically begins is that some student laboriously labels a dataset. For the sake of this example, my dataset consists of a single image with approximately 50 lemons in it:
As mentioned, the picture was carefully labeled:
with the human-labeled mask shown here:
Now that there is a ground-truth label, we can establish a measurement. One way to formally express the desire to identify lemons in this picture …
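For illustration, here is a minimal sketch of one possible formalization — my assumption, not necessarily the measure the post settles on — treating the task as pixel-wise segmentation and scoring a predicted lemon mask against the human-labeled mask with intersection-over-union (IoU):

```python
import numpy as np

def lemon_iou(predicted_mask: np.ndarray, ground_truth_mask: np.ndarray) -> float:
    """Score a predicted lemon mask against the human-labeled mask.

    Both arguments are boolean arrays of the same shape, True where a
    pixel belongs to a lemon. Returns intersection-over-union in [0, 1].
    """
    pred = predicted_mask.astype(bool)
    truth = ground_truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        # No lemons predicted and none labeled: treat as a perfect match.
        return 1.0
    return intersection / union

# Hypothetical usage: a toy 4x4 image with a single 2x2 "lemon".
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True          # the labeled lemon
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:2] = True           # a predictor that finds half of it
print(lemon_iou(pred, truth))   # 0.5
```

A measure like this already bakes in assumptions (pixel-level agreement, no notion of individual lemons), and it is exactly these kinds of choices that open the door to the pathologies discussed here.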