So we are trying to build Artificial Intelligence. But what is it? Is a program that plays chess or Go intelligent? After some thought, I think most people would agree that it is not really; it is just a computer program that managed to master a game. Is a large neural network -- optimised with gradient descent to approximate a dataset -- intelligent? Well, it is just a function approximator, so technically I would say no. All these exercises capture some aspect of what we would call intelligence, but the core of the idea remains elusive.
So why all the fuss about Artificial Intelligence?
A bit of history
The term "Artificial Intelligence" was coined by Prof. John McCarthy for the famous Dartmouth Conference in 1956. By his own account, he had to invent something to get the funding. Since its very origin the term has caused controversy and boom-bust cycles known as AI winters, among which the better documented ones are Minsky and Papert's 1969 book Perceptrons (which stalled connectionist research for quite a while), the Lighthill report in 1974, the 1987 collapse of expert systems (predicted by Minsky and Schank), and a more recent, smaller crisis in backpropagation-powered neural networks once people realised the vanishing gradient problem in the 1990s.
An inherent property of AI booms is the enormous enthusiasm they create, particularly among people who have no idea how these systems work or what their limitations are (venture capitalists or government officials, for example). The visions are typically very romantic: automatic translation of millions of phone calls, visual perception, cheap and capable robots, natural language communication with computers and, more recently, self-driving cars (which are a form of autonomous robot). Who would not like to have these wonders? Notably, there is a clear incentive to create hype: researchers need research money, and the best way to get it is to scare somebody in the government that SkyNet is about to be born (in some other country), therefore AI research needs the dime. Entrepreneurs need to convince VCs, so they use a similar strategy. All of this is quickly picked up by journalists, since the public loves stories about killer AI and the Terminator. Eventually everybody jumps on the AI bandwagon.
So here is what we've got: a field with a sexy name that no one really understands, which promises wonders beyond imagination. What could possibly go wrong?
What is intelligence?
Now on a serious note, can we figure out what intelligence even is before we start building it? This question has been the subject of heated philosophical debate for years, mixed up with things like consciousness, free will and so on. I'm not going to go that route in this essay. Instead, let us back off a little bit and conceptualise some recent developments, as well as analyse what intelligence is not.
In their 2013 paper "Causal Entropic Forces", A. D. Wissner-Gross and C. E. Freer view the problem of intelligence from a somewhat novel point of view - thermodynamics. This may sound scary, but their result can actually be summarised quite simply:
"Intelligent behaviour is to maximise the available future choices"
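A crude way to make this principle concrete is a toy sketch (my own illustration, not the paper's actual formalism, which maximises causal path entropy over trajectories): in a small grid world, an agent at a junction counts how many distinct cells each move leaves reachable within a short horizon, and picks the move that keeps the most options open. The map, horizon, and "count of reachable states" proxy are all illustrative assumptions.

```python
from typing import Tuple, Set

# Toy grid: '#' is a wall. The left side is an open room, the right
# side a dead-end corridor; both are reachable from the junction (3, 5).
GRID = [
    "########",
    "#....#.#",
    "#....#.#",
    "#......#",
    "########",
]
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # right, left, down, up, stay


def free(pos: Tuple[int, int]) -> bool:
    r, c = pos
    return GRID[r][c] != "#"


def reachable(start: Tuple[int, int], horizon: int) -> Set[Tuple[int, int]]:
    """All cells reachable from `start` in at most `horizon` steps (BFS)."""
    frontier, seen = {start}, {start}
    for _ in range(horizon):
        frontier = {(r + dr, c + dc)
                    for r, c in frontier
                    for dr, dc in MOVES
                    if free((r + dr, c + dc))}
        seen |= frontier
    return seen


def best_move(pos: Tuple[int, int], horizon: int = 3) -> Tuple[int, int]:
    """Pick the successor cell that keeps the most future states open --
    a discrete stand-in for 'maximise the available future choices'."""
    candidates = [p for p in
                  ((pos[0] + dr, pos[1] + dc) for dr, dc in MOVES)
                  if free(p)]
    return max(candidates, key=lambda p: len(reachable(p, horizon)))


# From the junction the agent steps left, towards the open room,
# rather than right, into the dead-end corridor.
print(best_move((3, 5)))  # -> (3, 4)
```

Nothing here is "intelligent" in any deep sense, of course; the point is only that a single, goal-free rule (keep your options open) already produces the danger-avoiding behaviour discussed below.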
Let's contemplate this statement for a bit and see what it means in the context of humans and animals. To do so, let us first see what it means to get yourself into a situation where there are not many choices (options) left. Such a situation is typically dangerous and very unpleasant. For example: you might be in the middle of a desert without water. Your choices are slim. You might be in jail. You might be standing against a wall in front of a raging crowd. You might be an animal surrounded by predators. Etc... What all of these situations have in common is that unless one of the very few remaining options gets you out of there, you are dead. Also, you might be whispering to yourself "how stupid I was to get myself into this in the first place".
By looking at these negative examples, it is clear why natural selection would promote building an organ (brain) that would help individuals avoid such dangers. Avoiding them meant survival and (potentially) reproduction. Those who did not have enough "imagination" to "predict" what might happen were erased from the gene pool.
Notice the words I emphasised in the paragraph above. We often admire people with rich imagination (a sophisticated model of reality in their heads) or those who can predict things we cannot. Such people are often deemed "intelligent". Those who died or injured themselves in some stupid (i.e. easy-to-predict) situation are considered dumb, which is the opposite of intelligent. Note that this meaning of intelligence has nothing to do with being able to play games or do math olympiads.
In fact there are people out there - those with savant syndrome - who are often capable of amazing things in very narrow domains. Otherwise they are helpless and need constant attention. They lack common sense or even a basic understanding of what surrounds them.
Is artificial intelligence an artificial savant?
So let us go back to AI. Looking at what we have built so far, one may get the impression that we have built an artificial savant: a computer that is able to make a perfect move in chess (or Go) while the rest of the room is on fire (after a brilliant quote by Anatol Holt).
No wonder, then, that our best robots cannot support themselves to avoid falling while opening a door. These robots have the brains of idiot savants (even though some undoubtedly smart people have been working on them). They can barely move and understand very little of reality, but could likely play chess very well. Now the problem is, you don't really want your driver to be a savant. You don't want your personal robot helper to be a savant. You might be fine if your spreadsheet is a savant. You might be fine if the content filter on your wifi hub is a savant. Your dishwasher could be a savant as well.
Now one could raise the AI-effect argument against what I'm saying: that I'm just discounting the accomplishments of contemporary AI by saying they are not real intelligence. And indeed, to some degree I am. What I'm more concerned about, though, is that this argument can be abused on both sides precisely because we cannot define what intelligence is, and therefore we have no idea what we are trying to build. And I'm not prejudiced against computation: I don't attribute any magic to what happens in my brain, which likely is a form of computation (connectionism is dear to my heart). So my argument is not that computation is something lesser than intelligence. In fact, I consider other humans to be agents with a biological computer in their heads, not God's children infused with a magical soul.
All I want is the butler robot we were promised 60 years ago, not another world Go champion savant. Until then, AI may keep delivering "wonders" no one asked for and declaring success, but the reality is that it will just be bullshit.
How to build real intelligence?
It will be a while before that happens, particularly if we continue on the path we are on right now. What really has to be done is to build a system that understands its environment and itself well enough to predict and avoid dangerous situations (in the real world, with its full complexity). Initially such entities will not have the marvellous cognitive abilities to play chess or Go. They will not even be able to do simple math or speak. Think more of something a one-year-old child could do. You might think that a one-year-old gets in trouble all the time, but that is not really the case - they only get in trouble because they live in an extremely complex environment designed for adults. Before there were cars and machines and electric outlets, small children did just fine, often unattended. By the age of five they can survive even in our contemporary complex environments. The day we build a robot that can handle such environments robustly and accomplish some simple goal under many varying conditions (e.g. find a bathroom in a number of different homes) will be the day true intelligence (in an artificial body) is born.
Dangers for AI research
The only danger I see for and from current AI research is that we are quickly running out of games to play with "AI" that would make news headlines equivalent to chess or Go. When that happens there will be a big crisis in the field. No more news stories to build up the narrative about SkyNet. Just a bunch of hopeless robots that can't handle a stack of toys on the floor.
- I acknowledge Todd Hylton, who has been promoting the phrase "Intelligence is real" and even had it printed on T-shirts.