This post is again about the definition of AI. It was provoked by a certain tweet exchange in which it turned out, once again, that everybody understands the term AI arbitrarily. So here we go again.
Introduction
Let us deal again with this fundamental question: what is and what is not AI. Determining what is artificial and what is not is not the problem; the problem is determining whether something is intelligent or not. This has confused, confuses, and will likely continue to confuse even very intelligent people.
Many application- or research-focused people, particularly in machine learning, avoid asking this question altogether, arguing that it is philosophical, undefined and therefore not scientific (and that touching this matter inevitably causes a mess). Instead they use the equivalent of duck typing - if it looks intelligent, it is intelligent - a somewhat extreme extension of the Turing test. I disagree with this opportunistic approach; I think getting this definition right is crucial to the field, even if it means getting into another s%#t storm. In fact, if the machine learning people's argument is that this discussion is insufficiently formal and too messy, I'd like to kindly suggest that it is their duty to formalize it, not to brag about it and patronize people who try to do so (even if those attempts are often unsuccessful). This subject has been a recurring theme of this blog, discussed in several posts, e.g. [Intelligence is real], [Reactive vs Predictive AI], [AI and the ludic fallacy], [Outside the box] and many others.
Basic assumptions
Before I start, let me clear the air by stating a few basic assumptions. I assume that intelligence is a natural, physical phenomenon, not anything supernatural - this will hopefully prevent this discussion from going into religion, free will, consciousness and all that stuff. As such, this natural phenomenon is exhibited by a variety of animals living on planet Earth, including but not limited to a particular species of primate ape whose members call themselves humans. Intuitively we can tell that there is a continuum of levels of intelligence, as we generally see a progression from smaller and simpler animals to larger ones with more sophisticated behavior. This progression roughly correlates with the size of their cognitive organ, the brain. However, lacking a precise definition of the phenomenon in question, it is hard to measure it quantitatively. Although we only clearly see intelligence manifested by biological beings, I have no prejudice against it being exhibited by other entities, including pieces of complex electronics, i.e. I assume it is computable. Also, for now, I don't assume any extra-classical foundations of intelligence, such as quantum computation, etc. I'm fine with it being a purely mechanistic and classical phenomenon, though likely one exhibited only by very complex systems.
I will accept a definition of intelligence only if it allows us to clearly separate the non-intelligent from the intelligent. A definition which, taken ad absurdum, recognizes everything or nothing as intelligent is not useful. A good definition has to retain some separating power, even when pushed to the very limit.
Testing the Turing test
In 1950 Alan Turing came up with his imitation game test. The basis of the test is to use humans (assumed to be intelligent beings) to judge whether another entity is intelligent or not based on an exchange of written messages, seemingly avoiding any prejudice pertaining to the actual implementation of the opponent. Essentially this is duck typing: if it looks intelligent in conversation, it is intelligent, no matter what is inside. This test is pretty strong, since arguing against it leads to solipsism: if one questions the intelligence of an entity passing the Turing test, one is questioning the intelligence of any entity besides oneself. That, however, does not mean the test is particularly useful or flawless. Let me go over several issues.
First of all, Turing assumes that intelligence is a verbal quality. The entire test is based on an exchange of verbal messages and will therefore be failed by every animal on Earth except humans (and likely quite a few humans would fail too). This is not very good, since it makes the test binary - human-level verbal intelligence or nothing. Moreover, the Turing test assumes a shared cultural context: a human raised in a different culture, with different basic philosophical priors than the judges, is likely to fail, simply because he will return "strange" responses. Such an all-or-nothing test does not provide a good gradient along which we could optimize.
Secondly, the Turing test is an imitation game! Its very definition invites all sorts of smoke-and-mirrors masters to manufacture their "intelligent" chatbots. Much like an illusionist may fool people into thinking that he materializes a rabbit in his hat, a smart researcher can fool human judges into thinking that they are dealing with the real thing (at least for a while). Ultimately the truth will prevail but, much like a magic show, a Turing test can only last a finite amount of time. So ultimately the game is not to fool humans indefinitely, just long enough to pass the test. Even though it is a game involving an intelligent adversary, it is still a game.
Perhaps the basic idea of the test is OK - it is essentially sampling behavior against some physical adversary - but maybe, instead of taking a verbal interaction with a human as the adversary, Turing should have chosen something else. Hold on to this thought, as I'll get back to it later.
Intelligence implies learning, but not vice versa
If it's not the Turing test, then we can list a whole litany of things that intelligent beings do and try to cover the subject from that direction. Above all, intelligent beings learn from and adjust to their environment, hence learning is an important attribute. But learning alone can be taken ad absurdum - it is, in essence, adjusting some internal variables based on external stimuli. By that token a thermostat is an intelligent being, and an artificial neural network optimized by any algorithm can be deemed intelligent. In fact, any control process involving any sort of optimization over time could be considered intelligent under this definition. So this is not a useful definition, as it covers almost everything when pushed to the limit.
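To see just how low this bar is, here is a minimal sketch (all names and constants are made up for illustration) of a thermostat-style controller that "learns": it adjusts an internal variable based on external stimuli and acts on it. Under the learning-equals-intelligence definition, this would already qualify.

```python
# A deliberately trivial "learner": it adjusts an internal variable
# (its temperature estimate) based on external stimuli (sensor readings)
# and acts on it. Under the "learning = intelligence" definition this
# would already count as intelligent - which is the point of the
# reductio above. All names and constants are hypothetical.

class Thermostat:
    def __init__(self, setpoint, learning_rate=0.1):
        self.setpoint = setpoint
        self.estimate = setpoint      # the internal variable being "learned"
        self.learning_rate = learning_rate

    def update(self, measured_temperature):
        # Adjust internal state toward the external stimulus.
        error = measured_temperature - self.estimate
        self.estimate += self.learning_rate * error
        # Act: heat if the estimate is below the setpoint.
        return self.estimate < self.setpoint


thermostat = Thermostat(setpoint=21.0)
for reading in [18.0, 19.5, 20.0, 22.5, 23.0]:
    print(reading, thermostat.update(reading))
```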
Games vs reality
What if a computer not only learns, but also exhibits complex behavior in a game setting, ultimately being able to win said game? Intelligent beings clearly like playing games, and cracking many games appears to require intelligence. Everyone has heard of the great breakthroughs of AI: IBM's Deep Blue beating Kasparov at chess in 1997, and DeepMind's AlphaGo beating Lee Sedol at go in 2016. Clearly both gentlemen are intelligent, likely extraordinarily intelligent, and now a computer can beat them at their own game! The immediate conclusion is that the computer is intelligent.
It is subtle, but I'll try to tear this illusion apart. Let me start with something simpler: say pong. Clearly, if we saw a dog playing pong, we would acknowledge that as a demonstration of intelligence. But when we are presented with, say, 20 lines of Python code that can play pong, we would justifiably express doubt. So what is up with those games?
Here is my take on it: the inherent property of games is that they are formal systems with relatively concise descriptions (rules). Such a formal system, taken in the context of a physical entity (an animal that typically lives in the physical world), requires substantial mental activity from the cognitive organ, and only sufficiently "intelligent" individuals excel at such games. But the formal nature and concise description mean that the game is limited to the states reachable via formal reasoning (everything stays within a well-defined box). Because of this pure, crystalline structure, there often exists a concise strategy for playing such a game, and that strategy can be expressed as a relatively short computer program (whether that program is typed in by a programmer or discovered via some sort of optimization is not important). The program is obviously not intelligent, often not even complex.
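To make the pong example concrete, here is a sketch of the kind of concise, obviously unintelligent strategy I mean. It assumes a hypothetical environment object exposing the ball and paddle positions and a step() method; the entire "strategy" is to move the paddle toward the ball.

```python
# A sketch of the "short program that plays pong" alluded to above.
# The environment interface (ball_y, paddle_y, step) is hypothetical;
# the whole "strategy" is: move the paddle toward the ball.

def play_pong(env, steps=1000):
    for _ in range(steps):
        if env.ball_y > env.paddle_y:
            action = "up"
        elif env.ball_y < env.paddle_y:
            action = "down"
        else:
            action = "stay"
        env.step(action)
```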
So what is different? All the animals we consider to exhibit intelligence are placed not in a game but in physical reality. And despite multiple attempts, it seems that reality does not have a concise formal description and therefore likely does not admit a concise, mechanistic winning strategy. Reality is not a formal system! The program playing go or chess does not need to worry about all those things a physical player needs to worry about!
Now, this is the most important sentence in this paragraph: physical reality is not a formal system with a concise description. Although we try to cover aspects of reality with formal models, and at various scales and in particular settings such descriptions can be very successful (e.g. Newtonian mechanics for a wide range of motion, Maxwell's equations for the propagation of electromagnetic waves, less so fluid dynamics for weather prediction), there is no, and likely never will be, a complete, concise description of reality at all scales.
This is like describing a general game to a programmer, but with the caveat that the rules may be subject to a random change at any point in time. Now, in order to have any hope of winning, one has to write a program that will realize the rules have changed (in a direction we cannot predict) and design a new strategy (for a game we cannot even anticipate at present): that is fundamentally more difficult. The ability to play such a "game" is the closest I can think of to a definition of intelligence.
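To make this concrete, here is a minimal sketch of what such a program would at least have to do. Everything about the environment (its states, actions and the env.step interface) is hypothetical; the only point is the structure: predict the consequences of your moves, and treat a sustained surge in prediction error as a sign that the rules have changed and the old strategy must be thrown away.

```python
# A minimal sketch of an agent facing a "game" whose rules may change at
# any time. The environment and its interface are hypothetical; the point
# is only the loop: predict, compare with what actually happened, and
# treat a surge in prediction error as "the rules changed, re-learn".

from collections import deque
import random

def play_changing_game(env, actions, horizon=10_000, window=50, threshold=0.5):
    model = {}                          # (state, action) -> predicted next state
    recent_errors = deque(maxlen=window)
    state = env.reset()

    for _ in range(horizon):
        action = random.choice(actions)        # placeholder policy
        predicted = model.get((state, action))
        next_state = env.step(state, action)   # hypothetical interface

        # Record whether our prediction of the consequences was wrong.
        if predicted is not None:
            recent_errors.append(0.0 if predicted == next_state else 1.0)

        # A sustained spike in prediction error = the rules have changed.
        if len(recent_errors) == window and sum(recent_errors) / window > threshold:
            model.clear()                      # forget the obsolete rules
            recent_errors.clear()              # ...and start re-learning

        model[(state, action)] = next_state
        state = next_state
```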
That brings us to the thing called reality (defined as this nasty stuff that refuses to disappear when one stops believing in it), and reality inherently involves physics.
Thermodynamic approach
If intelligence is the ability to play the physical game, where the context changes all the time, thereby redefining the rules (to be clear, not the basic laws of physics, but the rules stemming from high-level organization), then what is the objective? When does the agent win? To get a grasp of that we need to step back and take a broader view of our ecosystem: all the animals we know to exhibit intelligence are subject to evolution by natural selection. What is the objective of evolution? There is none. But the objective of a set of genes carried by an animal is to survive and reproduce (it is the selected "objective" only because all the sets of genes that had "different objectives" are long extinct). Anyway, in that context an agent is successful, roughly, if it survives long enough to raise offspring. Surviving in a complex environment with a bunch of co-evolving predators and other dangers clearly requires the ability to perceive and predict aspects of the environment. Can this be further formalized?
As I already discussed in the post [intelligence is real], one way to define a sensible objective was proposed in the paper by Alex Wissner-Gross and Cameron Freer [see the TED talk for a quick summary]. They propose that an intelligent system needs to maximize future causal entropy or, to put it in plain language, maximize its future choices. This in turn means avoiding all the unpleasant situations that leave very few choices. It makes sense from an evolutionary point of view, as it is consistent with the ability to survive; it is consistent with what we see among humans (collecting wealth and hedging on multiple outcomes of unpredictable things); and it generates reasonable behavior in several simple game situations (as presented in the paper and the TED talk).
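For illustration only, here is one crude way to operationalize "maximize your future choices": prefer the action after which the largest number of distinct states remains reachable within a short horizon. This is a toy proxy, not the formulation from the paper (which maximizes the entropy of possible future paths over a time horizon); the transition and successors functions, and the states themselves, are hypothetical.

```python
# A crude illustration of "keep your options open": pick the action after
# which the largest number of distinct states is still reachable within a
# short horizon. A toy proxy for causal entropy maximization, not the
# Wissner-Gross & Freer formulation. All interfaces here are hypothetical:
# transition(state, action) -> next state; successors(state) -> iterable of states.

def reachable_states(state, successors, horizon):
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {s for f in frontier for s in successors(f)} - seen
        seen |= frontier
    return seen

def most_empowering_action(state, actions, transition, successors, horizon=3):
    return max(
        actions,
        key=lambda a: len(reachable_states(transition(state, a), successors, horizon)),
    )
```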
What would one need if that were indeed the goal? Clearly, to optimize for future choices, one has to have some insight into the future. The future in general is not predictable (partially because of the lack of a concise formal description of reality), but at least some aspects of reality are predictable and regular. Therefore it appears appropriate to predict whatever can be predicted, and to treat the remaining prediction error as the most important signal, since it marks the points where the branches of possible future outcomes diverge. This is consistent with the predictive brain idea, which is in turn supported by a boatload of evidence from neuroscience [see references in the paper].
So to summarize: if we are playing a game where the rules are not fixed, the best strategy is to learn to predict whatever can be predicted and act based on the remaining error signal. I use this as my working definition of intelligence. Artificial intelligence is therefore any synthetic system which can exhibit this behavior. This definition appears to be good, since it is not biased towards the human kind of intelligence and includes non-verbal animals (unlike the Turing test), yet it certainly does not include everything, such as thermostats (unless they are predictive thermostats!).
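To contrast with the trivial thermostat from earlier, here is a toy sketch of what a "predictive thermostat" in this sense might look like: it learns a (deliberately crude) model of how the temperature evolves, and its behavior is driven by the error between what it predicted and what it actually measured. The linear model and all constants are arbitrary illustration choices.

```python
# A toy "predictive thermostat", contrasting with the trivial one earlier:
# it learns a simple linear model of how temperature evolves, predicts the
# next reading, and decides whether to heat based on the predicted future
# rather than the present. The model and constants are illustrative only.

class PredictiveThermostat:
    def __init__(self, setpoint, learning_rate=0.01):
        self.setpoint = setpoint
        self.slope = 1.0              # learned parameters of next ~= slope * prev + bias
        self.bias = 0.0
        self.learning_rate = learning_rate
        self.previous = None
        self.prediction = None

    def update(self, measured):
        if self.previous is not None:
            # Prediction error: the signal everything here is driven by.
            error = measured - self.prediction
            # Learn: nudge the model so it predicts better next time.
            self.slope += self.learning_rate * error * self.previous
            self.bias += self.learning_rate * error
        self.previous = measured
        # Act on the predicted future, not just the current reading.
        self.prediction = self.slope * measured + self.bias
        return self.prediction < self.setpoint    # heat if we expect to be cold
```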
Summary
Going back to games: although the rules there are fixed and concise for us, the agent acting within the game may have no access to those rules and may therefore act intelligently within the game. Clearly this is some weaker form of intelligence; let me then try to put together a rough hierarchy:
- Elementary intelligence - optimizes causal entropy within a very limited and stable ecosystem. This could include simple predictive-control systems.
- In game intelligence (or ludic intelligence) - optimizes for a reward in a game setting. This process ultimately results in a concise optimal strategy.
- Simple intelligence - survives in a relatively simple, stable or restricted ecosystem. Most simple animals such as amphibians and reptiles fit within this category. Some artificial robotic systems may begin to fit here, such as perhaps some more sophisticated driverless prototypes.
- Sophisticated intelligence - survives in a broad variety of ecosystems, including transitions from one ecosystem to another - most animals with a well-developed neocortex will fit here, including almost all mammals and possibly some cephalopods.
- Verbal intelligence - animals which can verbalize their internal states and use symbolic representations. They use tools to survive in a very wide range of environments. E.g. humans and to a lesser degree some other primates, possibly some whales or elephants.
This is roughly how I see it. Our current technical ability allows us to build "in game intelligence" pretty well, and we are slowly entering the territory of "simple artificial intelligence" with some autonomous robotic systems (such as driverless cars). We are probably years away from "artificial sophisticated intelligence" and probably decades away from "artificial verbal intelligence" (don't confuse the latter with formal-language-based AI, which I consider to be in-game AI). Note that I don't explicitly include artificial neural networks anywhere here: I consider them a tool for achieving intelligent behavior (possibly a crucial tool), but beyond that they are not directly related. Moreover, I'd say that at least 90% of the neural network literature has nothing to do with AI whatsoever (and much more to do with statistical/numerical optimization).
The point here is that instead of using a human to judge whether something is intelligent, we should use physical reality itself as the judge. This is very much related to autonomy and embodiment. If something survives in physical reality, finds novel ways out of trouble and does not get stuck on the smallest detail that was not anticipated and preprogrammed by its maker, it can be deemed to some degree intelligent. The DARPA Robotics Challenge shows us with brutal honesty how far we have actually gotten in AI as defined by such a metric.
Bottom line: next time, before you call your rectified matrix product AI, explain that you mean AI as in advanced informatics.