In recent weeks I've been forced to reformulate and distill my views on AI. After my winter post went viral, many people contacted me over email and on Twitter with good suggestions. Since there is now more attention on what I have to say, I decided to write down, in condensed form, what I think is wrong with our approach to AI and what we could fix. Here are my 10 points:
- We are trapped by Turing's definition of intelligence. In his famous formulation, Turing confined intelligence to a verbal game played against humans. This (1) frames intelligence as the solution to a game, and (2) puts humans in the position of judge. The definition is extremely deceptive and has not served the field well. Dogs, monkeys, elephants and even rodents are very intelligent creatures, but they are not verbal and hence would fail the Turing test.
- The central problem of AI is Moravec's paradox. It is vastly more stark today than when it was originally formulated in 1988, and the fact that we've done so little to address it in those 30 years is embarrassing. The central thesis of the paradox is that the apparently simplest aspects of reality are more complex than the most complex game. We are obsessed with superhuman performance in games (and other restricted, well-defined universes of discourse such as datasets) as an indicator of intelligence, a position coherent with the Turing test. We completely ignore the fact that it is reality itself, rather than a committee of humans, that makes the ultimate judgement on the intelligence of an actor.
- Our models may even work, but often for the wrong reasons. I've elaborated on this in my other posts [1], [2], [3], [4]; deep learning comes in as a handy example. We have apparently solved object recognition, yet numerous studies show that the reasons deep nets recognize objects are vastly different from the reasons humans detect them. For a person concerned with fooling humans in the spirit of the Turing test this may not matter. For a person concerned with an artificial agent's ability to deal with unexpected (out-of-domain) reality it is of central importance (a concrete sketch follows below).
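To make the "wrong reason" point concrete, here is a minimal sketch of the fast gradient sign method of Goodfellow et al., one of the simplest tools from those studies: a per-pixel nudge far too small for a human to notice can flip a deep net's verdict. The model choice and magnitudes are arbitrary, and the random tensor merely stands in for a real photo.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any differentiable classifier would do; a pretrained ResNet is convenient.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, eps=0.003):
    """Fast gradient sign method: nudge every pixel by +/- eps in the
    direction that most increases the classification loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

# A random tensor stands in for a real photo.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)                  # the net's original verdict
x_adv = fgsm(x, y)                          # visually indistinguishable from x
print(y.item(), model(x_adv).argmax(dim=1).item())  # labels frequently differ
```

If the net recognized objects for the same reasons we do, no imperceptible perturbation could change its answer.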
- Reality is not a game. If anything, it is an infinite collection of games with ever-changing rules. Every time some major development takes place, the rules of the game are rewritten and all the players need to adjust or they die. Intelligence is a mechanism that evolved to let agents solve this problem. Since intelligence is a mechanism for playing the "game with ever-changing rules", it is no wonder that, as a side effect, it allows us to play actual games with fixed rules. The opposite, however, is not true: building machines that exceed our capabilities at fixed-rule games tells us close to nothing about how to build a system that could play a "game with ever-changing rules".
- There are certain rules in physical reality that don't change: the laws of physics. We have verbalized them and used them to make the predictions that allowed us to build our civilization. But every organism on this planet masters these rules non-verbally in order to behave in its physical environment. A child knows an apple will fall from the tree long before it learns about Newtonian dynamics.
- Our statistical models for vision are vastly insufficient, as they rely only on the frozen-in-time appearance of things and a human-assigned abstract label (the skeleton below makes this concrete). A deep net can see millions of images of apples on trees and will never figure out the law of gravity (or many other things which are absolutely obvious to us).
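To see how little information the standard setup actually provides, here is the usual supervised objective in skeleton form; the tiny network and stand-in data are purely illustrative. Note that the loss touches nothing but a frozen frame and its class id:

```python
import torch
import torch.nn as nn

# Skeleton of the standard supervised setup: every training signal is a
# single frozen frame plus an abstract class id. Nothing in the loss
# refers to time, motion, or what happens to the apple next.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1000),
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

loader = [(torch.rand(8, 3, 64, 64), torch.randint(0, 1000, (8,)))]  # stand-in data

for image, label in loader:
    opt.zero_grad()
    loss_fn(model(image), label).backward()  # appearance -> label, nothing more
    opt.step()
```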
- The hard thing about common sense is that it is so obvious to us that it is very hard to even verbalize, and consequently to label in data. We have a giant blind spot covering everything that is "obvious". Consequently we can't teach computers common sense, not only because doing so would likely be impractical, but more fundamentally because we don't even realize what it is. We don't realize it until our robot does something extremely stupid, and only then comes the eureka moment: "oh, it does not understand that ... [insert any obvious fact of choice] ...".
- If we want to address Moravec's paradox (which in my opinion should be the focal point of any serious AI effort today), we somehow need to mimic the ability of organisms to learn purely from observing the world, without the need for labels. A promising idea towards this goal is to build systems that make temporal predictions of future events and learn by comparing the actual development with their prediction, as sketched below. Numerous experiments suggest that this is indeed what goes on in biological brains, and it makes sense from many perspectives, since such systems would, among other things, have to learn the laws of physics (as they appear to the observing agent, a.k.a. folk physics). The predictive vision model is a step in that direction, but certainly not the last step.
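In skeleton form, such a learner needs no labels at all: the error signal is the gap between the frame it predicted and the frame that actually arrived. The toy architecture below is purely illustrative and is emphatically not the predictive vision model itself:

```python
import torch
import torch.nn as nn

# Toy next-frame predictor: given frame t, guess frame t+1. The "label"
# is simply the next observation, so no human annotation is involved.
predictor = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

video = torch.rand(100, 3, 64, 64)  # stand-in for a stream the agent observes

for t in range(len(video) - 1):
    frame, next_frame = video[t:t+1], video[t+1:t+2]
    loss = ((predictor(frame) - next_frame) ** 2).mean()  # prediction vs. reality
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The contrast with the supervised skeleton above is the whole point: the supervision here is supplied by reality itself, one frame later.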
- We desperately need to frame the quality of "intelligence" outside of Turing's definition. Promising ideas arise from non-equilibrium thermodynamics, and they are consistent with the predictive hypothesis. We need this because we need to build intelligent agents that will certainly fail the Turing test (since they will not exhibit verbal intelligence), and yet we need a framework to measure our progress.
- Almost everything we do today and call AI is some form of automation of things that can be verbalized. In many areas this may work, but it is really not very different from putting Excel in place of a paper spreadsheet to help accountants. The area which is (and always was) problematic is autonomy. Autonomy is not automation; it means a whole lot more, and more still when it is required to be safer than humans, as in self-driving cars. Autonomy should almost be synonymous with broadly defined intelligence, as it assumes the ability to deal with the unexpected, the untrained, the proverbial unknown unknowns.
These are the core points I'd like to convey. They have various nuances, hence this blog. But certainly, if you acknowledge these points, we are pretty much on the same page. There are numerous other details that are heavily debated and that I don't think are essential, but for completeness let me express my views on a few of them:
- Innate or learned? Certainly there are organisms with innate capabilities, and certainly there are things we learn. This is, however, an implementation-related question and I don't think it has a definite answer. In our future development I'm sure we will use a combination of both.
- Learned features or handcrafted features? This is a related question. My broad view is that the vast majority of "cortical computation" will be learned, at least in the context of AI and autonomy (though that does not mean we can't handcraft something if it proves useful and otherwise hard to learn for some reason). There are also huge pieces of the brain that are most likely pre-wired. In the narrower context of automation, things can go both ways. There are cases in which learned features are clearly superior to handcrafted ones (the whole sales pitch of deep learning), but there are numerous applications where carefully handcrafted features are absolutely, unquestionably superior to anything learned. In general I think it is a false dichotomy; a toy illustration follows below.
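As a toy illustration of why the dichotomy is false, nothing stops a single model from mixing the two: below, a fixed, handcrafted filter bank (Sobel edge detectors) runs alongside filters learned end to end. The construction and all names in it are illustrative only:

```python
import torch
import torch.nn as nn

class HybridFeatures(nn.Module):
    """Handcrafted Sobel edge filters running alongside learned filters."""
    def __init__(self, learned_channels=14):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.fixed = nn.Conv2d(1, 2, 3, padding=1, bias=False)
        self.fixed.weight.data = torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1)
        self.fixed.weight.requires_grad_(False)      # handcrafted: never trained
        self.learned = nn.Conv2d(1, learned_channels, 3, padding=1)  # trained as usual

    def forward(self, x):
        return torch.cat([self.fixed(x), self.learned(x)], dim=1)

features = HybridFeatures()
print(features(torch.rand(1, 1, 28, 28)).shape)  # torch.Size([1, 16, 28, 28])
```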
- Spiking, continuous, digital or analog, maybe quantum? I don't have an extremely strong position here; each has advantages and disadvantages. Digital is simple, deterministic and readily available. Analog is hard to control but uses far less power. The same goes for spiking, though it has the added benefit of being closer to biology, which may suggest that for some reason it is the better solution. Quantum? I'm not sure there is any strong evidence that quantum computation is necessary for solving intelligence, though we may find out it is as we go. These are all questions about "how?". My main interest is in the question of "what?".
Since I want to keep this short (it is already too long) I'll stop here. Feel free to give me feedback in the comments, over email or on Twitter.