Those who regularly read my blog are aware that I'm a bit skeptical of the current AI "benchmarks" and whether they serve the field well. In particular, I think that the lack of a definition of intelligence is the major elephant in the room. As evidence that this is apparently not a well-recognized issue, take this recent Twitter thread:
Aside from the broader context of this thread, which discusses evolution and learning, Ilya Sutskever, one of the leading deep learning researchers, is expressing a nice-sounding empirical approach: we don't have to argue, we can just test. Well, as may be clear from my reply, I don't think this is really the case. I have no idea what Sutskever means by "obviously more intelligent" - do you? Does he mean a better ability to overfit existing datasets? Play yet another Atari computer game? I find this attitude prevalent in circles associated with deep learning, as if the field had some very well defined empirical measurement foundation. Quite the opposite is true: the field is driven by a dogma that a "dataset" (blessed as standard in the field by some committee) and some God-given measure (put Hinton, LeCun or … Read more...
A year ago I wrote a post summarizing the disengagement data that the state of California requires from companies developing Autonomous Vehicles. The thesis of my post back then was that the achieved disengagement rates were not yet comparable to human safety levels. It is 2018 now and new data has been released, so it is perhaps a good time to revisit my claims.
Let me first show the data:
And in a separate plot for better readability just Waymo, the unquestionable leader of that race (so far at least):
So where did that data come from? There are several sources:
- California DMV disengagement reports for years 2017, 2016 and 2015
- Insurance Institute for Highway Safety fatality data
- RAND driving to safety report
- Bureau of Transportation Statistics
One can easily verify the numbers plotted above with all of these sources. Now, before we start any discussion, let's recall what California defines as a qualifying event:
“a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of
… Read more...
Electric cars are great. They don't pollute, they drive without making noise, and they have incredible responsiveness and torque across the entire RPM range. They have a limited number of moving parts, and they don't need lubrication, hence don't consume oil.
These are all true. There is no point in arguing with these facts; anyone who has ever driven an electric car will concur. But there is always the other side, the one enthusiasts will not want to discuss. Let me go into a few issues I have with this technology.
All those amazing cars (such as Tesla) are based on lithium-ion batteries. Much like any other battery, this one uses two electrodes: the cathode is made of a lithium compound, typically a lithium metal oxide containing cobalt, while the anode is made of a form of carbon such as graphite. The electrolyte in between these electrodes is typically a lithium salt dissolved in an organic solvent. The exact chemistry varies between different types of cells, but overall positively charged lithium ions get carried from the anode to the cathode during discharge, and the reverse happens during charging, with the cobalt oxide hosting the ions. So in some sense the electric car actually has zillions of moving parts if we count all these ions traveling from anode to … Read more...
In this post I'd like to present a slightly different take on AI and expose one dimension of intelligence which we hardly explore with mainstream efforts. In order to do that, I'll use a metaphor, which should hopefully make things clear. As with every analogy, this one is bound to be imperfect, but it should be sufficient to get a certain idea across.
The way I see progress in artificial intelligence (AI) could be summarized with the following (visual) metaphor:
Imagine that elevation symbolizes the ability to achieve a certain level of performance in a given task, and each horizontal position (latitude/longitude) represents a task. Human intelligence is like a mountain, tall and rather flat, just like one of those buttes in Monument Valley. Let's call this mountain "Mt. Intelligence". A lot of horizontal space is covered by the hill (representing the vast number of tasks that can be accomplished); less intelligent animals can be represented by lower hills covering different areas of the task space.
In this setting our efforts in AI resemble building towers. They are very narrow and shaky, but by strapping together a bunch of cables and duct tape we can often reach an elevation higher than the "human … Read more...
This post is a bit of a mixed bag: a bit about technology and fragility, a bit about AI, and a tiny bit about politics. You've been warned.
Back in the communist and then early capitalist Poland, where I grew up, one could often get used Soviet equipment such as optics, power tools etc. Back in the day these things were relatively cheap and had a reputation for being very sturdy and essentially unbreakable (often described with the pseudo-Russian phrase "gniotsa nie łamiotsa", which essentially meant you could "bend it and it would not break"). There are multiple possible reasons why that equipment was so sturdy. One hypothesis is that Soviet factories could not control the quality of their steel very well, so the designers had to put additional margin into their designs; when the materials actually turned out to be of high quality, such over-engineered parts would then be extra strong. Another explanation is that some of that equipment was ex-military and therefore designed with an extra margin. Nevertheless, these often heavy and over-engineered products were contrasted in the early 90's with modern, optimized, western-made things. Western stuff was obviously better designed and optimized, lighter, but as soon … Read more...
Most people (at least those with a college education) are well aware of how exponential growth works. The typical (correct) intuition is that when things grow exponentially, they may initially look like nothing (in fact things may go very slowly for quite a while), but eventually there is an explosion, and exponential growth outpaces everything sub-exponential. What is less commonly appreciated is that exponential decay works similarly: things exist, get smaller, and at some point effectively become nonexistent. It is almost as if there were a discrete transition. Let us keep that in mind while we discuss some probability theory below.
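The near-discrete feel of exponential decay is easy to see numerically; here is a minimal sketch (the decay rate and visibility threshold are arbitrary illustrative choices, not anything from the post):

```python
import math

def decay(x0, rate, steps):
    """Track an exponentially decaying quantity x(t) = x0 * exp(-rate * t)."""
    return [x0 * math.exp(-rate * t) for t in range(steps)]

values = decay(x0=1.0, rate=0.5, steps=40)

# The quantity stays visible for a while, then effectively vanishes:
# once it drops below a perceptible threshold it is "gone",
# as if a discrete switch had flipped.
threshold = 1e-6
first_gone = next(t for t, v in enumerate(values) if v < threshold)
print(f"value at t=0: {values[0]:.3f}, first step below {threshold}: t={first_gone}")
# first step below 1e-6 is t=28, since exp(-14) ~ 8.3e-7
```

Nothing dramatic happens at any single step, yet on a linear scale the quantity looks like it simply ceases to exist.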
Gauss and Cauchy
Gauss and Cauchy were two very famous mathematicians, both with countless contributions in various areas of mathematics. Coincidentally, two similar-looking probability distributions are named after these two individuals. And although many people working in data science and engineering have a relatively good understanding of the Gaussian distribution (otherwise known as the "normal" distribution), the Cauchy distribution is less well known. It is also a very interesting beast, as it is an example of a much less "normal" distribution than the Gaussian, and most intuitions from typical statistics fail in the context of Cauchy. Although Cauchy-like distributions are … Read more...
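One way the Cauchy distribution breaks ordinary statistical intuition can be shown in a short simulation (the seed and sample size here are arbitrary choices of mine): the Cauchy distribution has no mean, so the running average of samples never settles down, while the Gaussian running average converges.

```python
import math
import random

def cauchy_sample(rng):
    # Inverse-CDF sampling: if U ~ Uniform(0,1),
    # then tan(pi * (U - 0.5)) is standard Cauchy distributed.
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
n = 100_000
cauchy_running, gauss_running = [], []
c_sum = g_sum = 0.0
for i in range(1, n + 1):
    c_sum += cauchy_sample(rng)
    g_sum += rng.gauss(0.0, 1.0)
    cauchy_running.append(c_sum / i)
    gauss_running.append(g_sum / i)

# The Gaussian running mean converges toward 0; the Cauchy one keeps jumping,
# because a single huge-tail sample can dominate the entire sum at any time.
print("Gaussian running mean (last):", gauss_running[-1])
print("Cauchy running mean (last):  ", cauchy_running[-1])
```

In fact, the average of n Cauchy samples is itself Cauchy distributed, so averaging buys you nothing at all, which is about as far from the central limit theorem intuition as one can get.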
One of the hallmarks of science is the reproducibility of results. It lies at the very foundation of our epistemology that the objectivity of a result can only be assured if others are able to independently reproduce the experiment.
One could argue that science today actually has various issues with reproducibility; e.g. results obtained with a unique instrument (such as the LHC - the Large Hadron Collider) cannot be reproduced anywhere else, simply because nobody has another such instrument. At least in this case the results are in principle reproducible, and aside from the lack of another instrument, the basic scientific methodology remains intact. Things get a bit more hairy with AI.
Determinism, reproducibility and randomness
One hidden assumption behind reproducibility is that reality is roughly deterministic, and that the results of an experiment depend deterministically on the experimental setup. After carefully tuning the initial conditions we expect the same experimental result. But things start to get more complex when our experiment itself is statistical in nature and relies on a random sample.
Take, for example, the experiment called an election: once the experiment is performed it cannot be reproduced, since the outcome of the first experiment substantially affects the system studied … Read more...
With today's advancements in AI we often see media reports of superhuman performance in some task. These often quite dramatic announcements should, however, be treated with a dose of skepticism, as many of them may result purely from pathologies in the measures applied to the problem. In this post I'd like to show what I mean by a "measurement pathology". I have therefore constructed a simple example, which hopefully will get the point across.
Example: measuring lemons
Imagine somebody came to your machine learning lab/company with the following problem: identify lemons in a photo. This problem sounds clear enough, but in order to build an actual machine learning system that will accomplish such a task, we have to formalize what it means in the form of a measure (of performance). The way this typically begins is that some student will laboriously label a dataset. For the sake of this example, my dataset consists of a single image with approximately 50 lemons in it:
As mentioned, the picture was carefully labeled:
With the human-labeled mask here:
Now that there is a ground truth label we can establish a measure. One way to formally express the desire to identify lemons in this picture … Read more...
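One standard way to turn pixel-level labels into a single performance number (plausibly the kind of measure the post goes on to discuss, though I am guessing here) is intersection over union (IoU) between a predicted mask and the ground-truth mask. A minimal sketch, with tiny made-up 4x4 masks standing in for the labeled lemon image:

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (lists of lists of 0/1)."""
    inter = sum(p & t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    union = sum(p | t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    return inter / union if union else 1.0  # two empty masks match perfectly

# Hypothetical masks: 1 marks "lemon" pixels, 0 marks background.
truth = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
pred  = [[0, 1, 1, 1],
         [0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(iou(pred, truth))  # intersection = 3 pixels, union = 5 pixels -> 0.6
```

Note how much is already baked into this single number: it treats all 50 lemons as one undifferentiated pixel set, so a prediction that misses an entire lemon but slightly over-segments the rest can score the same as one that finds every lemon imperfectly. Choices like this are exactly where measurement pathologies creep in.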
I have a few AI-related posts in the pipeline, but before I publish them (most still need some work), I want to share a recent experience and some thoughts on it.
I just came back from a trip to Europe, a typical summer visit. The trip went fine, the children were happy, and the flight was uneventful. I spent a week there, back in my hometown, visiting friends and family. This time, however, I decided to pay attention to something different than usual: instead of focusing on the things that have changed, I decided to seek out the things that have remained the same.
It's been more than 7 years since I moved from Poland to California; nevertheless, there are countless things there which seem not to have changed at all, e.g. particular stores and institutions, my neighbors, bars and cafés, etc. Caught up in the constant push for progress, we tend not to see how many things appear to be frozen in time.
Now let me get to a concrete example of what I'm talking about: on my way there I obviously took a transcontinental flight with one of the major European airlines. A nice and neat Airbus A380 welcomed us at … Read more...
Regular readers may by now have gathered that I'm skeptical about the current self-driving car hype. To make things clear: this is not because I would not like to use a driverless car, or because I think it is fundamentally impossible. My skepticism is merely caused by my concern that the technology we have right now is not mature enough for such an application. That includes both the fundamental technological primitives in the space of AI and economic feasibility. Also, the increasing hype and sensational press reports are not helping the realistic, fact-based discussion that should take place.
The argument often repeated in the popular press and used by proponents of autonomous cars is that they will be much safer than humans. This argument is very potent and emotional, as many of us have had a relative killed in a car accident, and the number of these accidents is still too high (even though in absolute terms motor vehicle related fatalities are very rare). I would certainly like to see improved safety by whatever means; the lowest hanging fruits in this space, I think, are: better training and testing of drivers … Read more...
This post is again about the definition of AI. It was provoked by a certain Twitter exchange in which it turned out, once again, that everybody understands the term AI differently. So here we go again.
Let us deal again with this fundamental question: what is and what is not AI. Determining what is artificial and what is not is not the problem; the problem is determining whether something is intelligent or not. This has confused, confuses, and will likely continue to confuse even very intelligent people.
Many application/research focused people, particularly in machine learning, avoid asking this question altogether, arguing that it is philosophical, undefined and therefore not scientific (and that inevitably touching this matter causes a mess). Instead they use the equivalent of duck typing - if it looks intelligent, it is intelligent - a somewhat extreme extension of the Turing test. I disagree with this opportunistic approach; I think getting this definition right is crucial to the field, even if it means getting into another s%#t storm. In fact, if the argument made by the machine learning people is that this discussion is insufficiently formal and too messy, I'd like to kindly suggest that it is their duty to formalize it, not to … Read more...
I'm taking a break from AI in this short post; it's time for something more general about the universe [see the last post in this category, "what if we had a warp drive"].
In our daily activities we may not notice how lucky we are - we can see the sky. I mean the deep sky, even far beyond our Galaxy. And by looking at those things, we can learn that the Universe is expanding, that there are quasars, active galaxies, large scale cosmic structures, galaxy clusters, cosmic background radiation and many other marvels. We treat all that as obvious.
But imagine the Sun, along with the solar system, was trapped inside one of the dense nebulae, of which there are countless numbers in our Galaxy. Say we were trapped somewhere deep inside the Orion nebula. All we would see in the night sky would be the faint pink glow of hydrogen and maybe a few blurred stars shining through the fog.
And worst of all, since the nebula is many, many light years across, we could do nothing to see beyond it. Absolutely nothing. Discovering anything about the outside universe would require sending a probe light years … Read more...
While rereading my recent post [the meta-parameter slot machine], as well as a few papers suggested by the readers in the comments, I've realized several things.
On the one hand we have Occam's razor: choose only the simplest models of things. On the other hand we know that in order to build intelligence, we need to create a very complex artifact (namely something like a brain) that has to contain lots of memories (parameters). There is an inherent conflict between these two constraints.
Many faces of overfitting
If we have a model too complex for the task, we will often find that it overfits, since it has the capacity to "remember the training set". But things may not be so obvious in reality. For example, there is another, counterintuitive situation where overfitting may hit us: the case where the model is clearly too simple to solve the task we have in mind, but the task as specified by the dataset is actually much simpler than what we had originally thought (and intended).
Let me explain this counterintuitive case with an example (an actual anecdote I heard, from Simon Thorpe as far as I remember):
Figure 1. … Read more...
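The counterintuitive case above, where the dataset is secretly much simpler than the intended task, can be sketched with a toy example. Everything here (the brightness values, the threshold, the "shortcut" feature itself) is made up for illustration and is not from the anecdote:

```python
# We intend the model to recognize the target object, but in our dataset
# every positive image happens to be brighter than every negative one.
# A trivial one-parameter model then "solves" the dataset perfectly
# without learning anything about the intended concept.
positives = [0.81, 0.77, 0.92, 0.85]  # mean pixel brightness of "target" photos
negatives = [0.31, 0.42, 0.25, 0.38]  # mean brightness of "no target" photos

def shortcut_classifier(brightness, threshold=0.6):
    """A model far too simple for object recognition: one threshold."""
    return brightness > threshold

accuracy = (sum(shortcut_classifier(b) for b in positives)
            + sum(not shortcut_classifier(b) for b in negatives)) / 8
print(accuracy)  # 1.0 on this dataset, yet the model knows nothing about objects
```

In the overfitting vocabulary of this post: the model has not generalized the intended concept at all; it has fit a statistical accident of the dataset, and will fall apart the moment the lighting conditions change.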
In my career I've encountered researchers in several fields who try to address the (artificial) intelligence problem. What I found, though, is that researchers acting within those fields had only a vague idea of all the others trying to answer the same question from a different perspective (in fact, I initially had a very faint idea myself as well). In addition, following the best tradition of Sayre's law, there is often tension and competition between the researchers occupying their niches, resulting in violent arguments. I've had the chance to interact with researchers representing pretty much all of the disciplines I'll mention here, and as many readers of this blog may be involved in research in one or a few of them, I decided it might be worthwhile to introduce them to each other. Within each community I'll try to explain (at least from my shallow perspective) the core assumption, the prevalent methodology, and the possible benefits and drawbacks of the approach, as well as a few representative pieces of literature/examples (a purely subjective choice). My personal view is that the answer to the big AI question cannot be obtained within any of these disciplines, but will eventually be found somewhere between them, and … Read more...
Today we'll step back a bit and consider the psychology of a machine learning researcher doing their job, a subject which interests me deeply and one that I've already touched on in another post. Some of this comes from my own introspection, as I've been doing machine learning for quite a few years now.
Emails and ML models trigger dopamine
It is a well-known fact from biology that little achievements trigger the release of small amounts of dopamine - a neurotransmitter believed to be involved in reinforcement learning. Dopamine makes us feel good and also triggers plasticity in certain parts of the brain (likely allowing the brain to "remember" what behaviour led to the reward). Reinforcement learning, however, has its issues, since the reward can appear by coincidence and therefore reinforce the "wrong cause". This is very much visible these days with the Internet, emails and texts: since receiving an important and rewarding message reinforces the behaviour which led to it - and that most likely was pressing the "get mail" button - we get addicted to checking email! The same applies to social media and texting, and it is also the mechanism underlying gambling. In reality rewards … Read more...