Myths and facts about AI

Given the overwhelming amount of excitement (and the inevitable media noise), I decided to make a concise summary of the state of the technology (as of late 2016) in order not to go insane.

What is AI?

Artificial intelligence is a misleading buzzword, and these days it is used for anything having to do with automation via computing. Generally it applies to a set of optimisation methods loosely connected with outdated theories of how the brain works.

Isn't AI close to being solved, given that recent progress shows the technological singularity is inevitable and near?

The singularity may or may not happen. As with any reasoning that extrapolates a trend, there could be barriers that prevent these prophecies from ever materialising. If we extrapolated the distance travelled by humans in space between the late 1940s and the early 1970s and fitted it with an exponential curve, we would have had to send astronauts to Jupiter by now. That clearly did not happen. The same goes for Moore's law and progress in computing: although there was a period when computing power doubled every 20 months or so, it is not clear whether this still applies (comparing contemporary computers with those from, say, 10 years ago makes it questionable) or how sustainable the trend is.
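The extrapolation trap is easy to demonstrate with a toy calculation (the numbers below are illustrative assumptions, not historical data): assume unbroken exponential growth from some early boom period and project it decades ahead, and the result is absurd.

```python
# Toy illustration of naive exponential extrapolation.
# Assumption: some quantity doubled every 20 months during an
# initial boom (a stand-in for Moore's-law-style reasoning).
DOUBLING_MONTHS = 20

def extrapolate(initial_value, months):
    """Project the value forward assuming unbroken exponential growth."""
    return initial_value * 2 ** (months / DOUBLING_MONTHS)

# Starting from 1 unit, project 40 years (480 months) ahead:
# the naive forecast is 2**24, roughly 16.8 million times the start.
print(f"Naive 40-year projection: {extrapolate(1.0, 480):,.0f}x")
```

The arithmetic is trivially correct; the prophecy fails because the assumption of unbroken doubling is doing all the work.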

What is deep learning?

Deep learning is a name for a class of connectionist models with more than three layers, including multilayer perceptrons, Boltzmann machines, etc. Many of the algorithms in question were conceived in the 1980s or 1990s but were recently rediscovered, implemented on contemporary GPUs and trained on much larger, contemporary datasets. The term "deep learning" is often used interchangeably with "machine learning" (though it is arguably a subset of it).
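Stripped of the hype, the core object is small. A minimal sketch of a "deep" multilayer perceptron in numpy (the layer sizes and random weights are arbitrary choices for illustration, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between layers.
    return np.maximum(0.0, x)

# More than three layers of (linear map + nonlinearity) is all
# that "deep" really means here. Sizes are illustrative.
layer_sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through every layer of the network."""
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # raw scores from the final layer

print(forward(rng.standard_normal(4)))  # a 2-dimensional output
```

Training (backpropagation on large datasets) is where the GPUs and the data come in; the architecture itself is decades old.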

Deep learning has solved AI?

Artificial intelligence is not even properly defined, let alone solved. Deep learning has vastly improved performance on certain specific problems in perception (computer vision), such as identifying an object in a photo given limited clutter and reasonable lighting. Deep learning has also improved speech recognition and many other specific tasks that have a clear optimisation criterion. It is not clear what optimisation criterion "intelligence" in the strong sense follows. So as impressive as these improvements are, the big AI problem remains generally unresolved.

Computers can play Atari and Go better than humans, what is the significance?

Since the late 1990s computers have been able to play chess better than humans. The significance is similar. Ironically, nearly 20 years after IBM's Deep Blue defeated the world chess champion, there is still no robot available that could robustly move chess pieces on an arbitrary chessboard in broad, unrestricted conditions. It seems that moving the pieces is a much harder problem than the game of chess itself (see Moravec's paradox).

Do we need to worry about ethics in AI?

We need to worry about ethics in every aspect of our technological progress. Any technology can be used maliciously, and the dangers and limits should be discussed by qualified people, including philosophers. That being said, debating the social consequences of strong AI is rather premature and feels like debating overpopulation on Mars. It can very well be explored in sci-fi literature (and has been already) but does not call for any urgent action or regulation.

Will my job be displaced by a machine?

Progress in technology inevitably makes certain professions obsolete. Contrary to widespread belief, I don't think truck drivers are in immediate danger of losing their jobs, nor, I think, are Uber drivers. With trucks, even if the technology allows driving to be automated, there will still be a need for a human on board for unexpected situations, much as we have pilots on planes even though planes could technically have been autonomous years ago (and much of the flight time is on autopilot anyway). With taxi/Uber drivers, my personal feeling is that a fully autonomous car that could go into any urban environment is further away than the general public thinks, at least 10, maybe 20 years away. Ironically, "deep learning researcher" may become an extinct profession after the next AI winter, possibly much sooner than truck drivers are displaced.

Is there a danger of robots conquering humans?

Currently AI struggles when applied to robotics. Low-level perception and mobility remain very hard, as stated by Moravec's paradox, and even the best robots can barely open a door. So if you are afraid of AI robots and terminators, just make sure to keep your doors closed.

Do our current instances of AI work like the human brain?

We still don't know how the brain works (not only the human brain but, in fact, any brain). Some of the current AI algorithms draw loosely from the neuroscience of the late 1970s (e.g. the concept behind the neocognitron). Since then neuroscience has made a number of discoveries which generally did not reveal how the brain works, but certainly showed that it does not work the way we previously thought. To summarise: we don't know how the brain works, but we know almost for certain that it does not work the way our current AI does.

Is there an AI bubble?

Looking at the amount of media attention, the proliferation of startups and the vast activity of key researchers on social media, I'd say yes. Big time.

I know nothing about machine learning but I want to start a project/startup that will build a general-purpose household robot. Will deep learning solve the brain part?

Respectfully, go and do something else.

Algorithms are solved, now it's all about the data?

That is probably true in the context of supervised learning. But supervised learning is only one way of doing things, and likely not the smartest. Some of the datasets used in training are rapidly approaching the amount of data gathered over the lifetime of a human being, yet the resulting algorithms only perform specific tasks and are generally not grounded in reality. Consequently, it seems we should be able to extract a lot more knowledge from similar amounts of data if we were doing the right thing. Hence there is a huge opportunity to improve the algorithms, particularly in unsupervised learning.

Can computers understand visual scenes?

Generally the understanding is rudimentary. As I mentioned above, detecting an object in a reasonably exposed photo with limited clutter is possible. The general gist of a scene (e.g. whether it is a natural, urban or industrial setting) can also be reliably estimated. However, relations between objects in the scene, the general context of the situation and many things we take for granted remain elusive. There are attempts at making systems that describe the content of an image based on large amounts of annotated data (ontologies), but personally I don't think it is the right way to go. Much like building a coal-fired rocket: you can make a lot of smoke, show good progress and burn a ton of money, but the thing is just not going to fly.

Is there an AI revolution?

It seems more like evolution to me: the algorithms have changed little since the 1990s. GPUs and increasing amounts of data have made those algorithms practical to apply. There is currently a lot more hype than the actual progress warrants, which may eventually lead to an AI winter.

Can AI be fooled?

As it turns out, yes, quite easily. Adversarial examples show how the best deep networks can be tricked into bizarre errors. This is yet another sign that the way we do perception right now probably has very little to do with how the brain does it.
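The mechanics can be sketched in a few lines (a hand-built linear classifier in numpy, purely illustrative; real adversarial examples target deep networks, but the gradient-sign trick is the same idea): nudging every input component by a small amount in the worst-case direction flips the prediction even though no single input changed much.

```python
import numpy as np

# Toy linear "classifier": sign(w @ x) decides the class.
# Weights and input are illustrative assumptions, not a trained model.
w = np.array([0.5, -0.4, 0.3, -0.2, 0.5, -0.4, 0.3, -0.2, 0.5, -0.4])
x = np.ones(10)          # w @ x is about 0.5 -> class "positive"

# Fast-gradient-sign-style perturbation: move each component of x
# by at most eps against the current decision.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(w @ x)      # about 0.5 -> "positive"
print(w @ x_adv)  # about -0.24 -> prediction flips
```

No input component moved by more than 0.2, yet the score crossed zero, because the tiny per-component changes all push the same way. In high-dimensional inputs such as images, the same effect lets an imperceptible perturbation fool the network.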


If you can think of any other questions, shoot me an email at [email protected] or leave a comment.

