When you are 80% there, it means you are not there

Apparently we live in a world where the singularity is about to happen and artificial intelligence (AI) will cover every aspect of our lives. But the field of AI has always been inflated by bubbles and busts known as AI winters. Why is that so, and is this time different?

Human psychology

There are several weaknesses of human psychology that make us very susceptible to hype in AI. First of all, we should note that humans have amazing perception, particularly visual perception. The problem is that the great majority of our marvellous vision develops by the age of 2, and so none of us remembers what it's like not to perceive the world correctly. By the time we begin to verbalise (and remember anything), all the low- and mid-level perceptual machinery is up and running. So our psyche wakes up in a world where everything already makes sense, and what remains to be learned and achieved are the higher cognitive tasks.

This phenomenon is reflected in our approach to AI. We tend to believe that artificial intelligence is about playing chess or Go (or Atari), because that is the kind of higher cognitive task we are excited about by the time we reach production age. We completely forget that there is the unsolved world of low-level perception, because perception just seems trivial. This is particularly visible in the field of robotics, where human concepts embedded in computer code do indeed have to collide with reality. The current state of the art in robotics can be summarised by the results of the DARPA Robotics Challenge, a few highlights here:

Each one of the robots shown is a machine worth hundreds of thousands if not millions of dollars, with a huge research team behind it. Each of these robots is actually controlled over a restricted connection; it is not even fully autonomous. This is the state of the art. Even with all the LIDAR sensors, bells and whistles, these machines are hopeless in an unrestricted environment. They can't understand the reality surrounding them, they can't creatively use the terrain to support themselves, etc.

In 1988, Hans Moravec, a robotics researcher at CMU, noted:

"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility"

Nothing confirms this paradox better than the 2015 DARPA Robotics Challenge contrasted with recent advances in training AI to play Atari games and Go.

Diminishing returns

Another cognitive bias that people are susceptible to concerns diminishing returns. The reason people may have a hard time understanding this simple principle is that at its core it is nonlinear, and people appear to have problems understanding nonlinear things. The statement is as follows: you invest X and get Y. You invest 2*X and expect 2*Y, but instead get Z, which is much smaller than 2*Y.

In the context of AI and machine learning this is exemplified by the race for more data. Researchers train their model and achieve 80% performance with 1 GB of training data. The next thing they do is get 2 GB of data, but they only reach 85% performance. Before you know it, you have a team of researchers crunching 100 GB of data to get a marginal improvement of 1-2%. In some applications that 1-2% may be meaningful, but the last 20% of performance is a long and difficult path.
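
To make the shape of that curve concrete, here is a toy sketch (not from any real experiment): it assumes, purely for illustration, that the error rate decays as a power law of the dataset size. The constants are invented; only the flattening shape of the curve is the point.

# Toy illustration of diminishing returns. Hypothetical assumption:
# error = err_at_1gb * size**(-alpha). The constants are made up;
# what matters is that each extra gigabyte buys less than the one before.

def toy_accuracy(gb_of_data, err_at_1gb=0.20, alpha=0.3):
    """Hypothetical accuracy when the error decays as a power law of data size."""
    return 1.0 - err_at_1gb * gb_of_data ** (-alpha)

for gb in [1, 2, 4, 10, 50, 100]:
    print(f"{gb:>3} GB -> {toy_accuracy(gb):.1%}")

# Roughly: 1 GB -> 80.0%, 2 GB -> 83.8%, 10 GB -> 90.0%, 100 GB -> 95.0%.
# Doubling from 1 GB to 2 GB buys ~4 points; doubling from 50 GB to 100 GB buys ~1.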

For robotics, and for applications requiring autonomous decision making of any appreciable scope, things cannot just work 80% of the time. Even 95% is not acceptable. What you really need is 99.9999% reliability or more. For example, imagine a self-driving car that runs 24/7. It is enough for it to do something ridiculous for a single second: at high speed, or in a densely populated environment, being wrong for just 1 s is often enough to get into unrecoverable trouble. There are 604,800 seconds in a week. If we feel comfortable with a self-driving car that makes a ridiculous mistake for one second every week on average (I'm not sure I'd like to be driven by such a car), we get a required reliability of 0.9999983465608465. That is a tough call.
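
For concreteness, here is the back-of-the-envelope arithmetic behind that number, spelled out as a minimal sketch:

# One "ridiculous" second allowed per week of continuous (24/7) driving.
seconds_per_week = 7 * 24 * 60 * 60       # 604800 seconds in a week
reliability = 1 - 1 / seconds_per_week    # fraction of seconds that must be fine

print(seconds_per_week)   # 604800
print(reliability)        # ~0.9999983465608 -- roughly five nines of reliability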

Now, that being said, there are a bunch of non-critical applications in which 80% or 90% is just fine, particularly if any previous method only got, say, 60% or worse. But it's important to realise that any application that will act in the real world, interact with real people, with real unpredictability, with real randomness and bizarre situations, has to be really good.

Anthropomorphisation

Imagine a car driving on a freeway. Most of the time, all it has to do is follow the lane and keep a distance to the vehicle in front, or just keep the speed. Easy. In fact really easy. Now, if you drive from LA to Vegas, this easy mode may actually encompass, say, 70% of your trip. You don't need your brain for this; in fact, simple algorithms are capable of solving this part. The places where you do need your brain are the corner cases, ranging in complexity from a traffic jam and people doing weird things, to construction zones and workers directing traffic in a strange way, to animals or people running across the road, to fires and emergencies on the side of the road, to some really bizarre stuff I can't even anticipate (e.g. a volcano blowing up or something). For some of these cases you really need a brain to understand what is going on and how you should react to save your life.

Now the problem people have is anthropomorphising the machines and computers they are dealing with. If you have just had your autopilot on for a few hours and it dealt fine with a certain number of easy or moderately easy situations, you begin to expect that it will deal properly with more complex stuff as well. You start attributing intentionality to its actions. The more you drive with it, the more of your own cognitive abilities you project onto the device. It is really easy to convince yourself that this device actually understands reality just as well as you do. It does not. Not even close. It may have better sensors and no blind spots, but in terms of understanding the broader context of the situation it is completely clueless (as of 2016 at least).

Beware of this projection. It is a really powerful illusion, and it may cost you your life, as it did one unfortunate Tesla driver.

The last 10%

The last 10% of performance is a horrible place. This is the place where a bunch of people have made a lot of promises about emerging technology. This is the place where you begin to realise that getting more data is just not enough. This is the place where you realise that your algorithm will just not make it. But you are here because you have already invested an incredible amount of effort into it (and others have as well; you don't want to let them down). And that is another problem with human psychology: the sunk cost fallacy. You feel that since you have already spent all that time and all those resources, and you are so close (which is just a numerical illusion, since the 80% below does not really matter much), you just have to keep going. So you keep going. Until things crash.

The irony

That is why AI fails in big busts. People get driven by its romantic promises and then fall victim to their own psychological illusions. It is ironic that our quest for artificial intelligence, the holy grail, the crowning achievement of humanity, pursued by some of the smartest people out there, uncovers many of our own basic weaknesses: the weaknesses of our own perception, the inability to clearly see what is hard and what is simple, the inability to perceive simple nonlinearity, and finally the inability to escape the addictive process of beating on a model which no longer shows improvement (particularly in the context of social pressure and hype).

Is this time different? I don't think so. Not, at least, until we start solving the basic problems of perception and mobility. Solving more and more games and artificial benchmarks can certainly keep us busy for years to come, but it will not make a difference.
