The Elephant in the Chinese room

There is an ancient argument in the field of AI called the Chinese room. The thought experiment, proposed by John Searle in the early eighties, goes as follows:

  1. You put somebody who does not know Chinese in a room.
  2. You give them a lengthy set of instructions (a program) on how to respond to given Chinese symbols.
  3. Finally, you run the experiment by feeding Chinese sentences in at the input and reading the sentences that come out at the output. The Chinese speakers outside are convinced they are having a conversation with a sentient being, but the poor fellow inside just shuffles symbols and has no idea what he is conversing about.

The conclusion is that even though the external observers assume (via the Turing test) that they are observing intelligence, the person inside is clearly unaware of what is going on, and therefore the apparent intelligence is not genuine understanding.
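To make the symbol shuffling concrete, here is a minimal toy sketch of such a rule book in Python (the patterns and replies are of course made up for illustration): input sentences are matched against surface patterns and canned replies are copied out, with no model of what any of the symbols mean.

```python
import re

# A toy "rule book": each entry pairs a surface pattern with a canned reply.
# The operator needs no understanding of the conversation -- only the rules.
RULE_BOOK = [
    (re.compile(r"how are you", re.I), "I am fine, thank you. And you?"),
    (re.compile(r"your name", re.I), "Names are not important."),
    (re.compile(r"weather", re.I), "The weather has been remarkable lately."),
]

def room(sentence: str) -> str:
    """Shuffle symbols: return the reply of the first rule that matches."""
    for pattern, reply in RULE_BOOK:
        if pattern.search(sentence):
            return reply
    # No rule matches: fall back to a generic, noncommittal reply.
    return "That is an interesting point, tell me more."

print(room("How are you today?"))    # looks conversational...
print(room("Why is the sky blue?"))  # ...until the rule book runs out
```

A couple of exchanges can look fluent; a slightly longer conversation exposes that there is nothing behind the rules.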

Personally, I have several issues with that experiment. First of all, it is a thought experiment, and it assumes that externally recognised intelligence could be implemented by a person with a book of symbol transformations. Although a computational input/output relation like that should be implementable by a "computer", the necessary derivations could be enormous. In other words, the transformations carried out by the individual inside could be so long and complex (requiring substantial memory and scratchpads) that even though the gentleman inside would just be executing rules, the resulting "representations" would be sufficient to call the room as a whole "intelligent". The second assumption is that the fellow inside has no clue what is going on. Most likely, after enough interaction with the symbols he would simply learn Chinese, and the whole argument collapses: the observed intelligence reduces to the natural intelligence of the gentleman inside.

That being said, the argument raises an important issue: there seem to be different "depths" at which behaviour can be intelligent, and people are sensitive to certain "shallow" instances of AI.

The problem is actually quite fundamental and relates to knowledge grounding: although we can create chat bots that act "intelligent" over a few simple sentences, a short conversation with a perceptive human is enough to reveal that their knowledge representations lack many of the seemingly obvious connections.

Similarly with visual applications: we have systems that can seemingly perceive objects in pictures, but adversarial examples show that their "visual understanding" is substantially shallower than ours.
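As a concrete illustration, here is a minimal sketch of one standard recipe for constructing such examples, the fast gradient sign method, assuming PyTorch; model, image and label below are placeholders for any differentiable classifier and a correctly classified input. A perturbation imperceptible to a human is often enough to change the predicted label.

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.01):
    """Return image + epsilon * sign(gradient of the loss w.r.t. the input)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss the most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (with hypothetical tensors): adv = fgsm(model, image, label)
# model(adv).argmax() frequently differs from label, even though a human
# cannot tell image and adv apart.
```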

Now, I would not go as far as to say that this indicates AI could never be achieved. It rather indicates that we are not there yet. In fact, we will know we are there when an AI system is able to regularly reveal the shortcuts in our own thinking (as a "smarter" human can do with a "dumber" one).

Many of the problems begin with the fact that AIs are built disconnected from physical reality. Chat bots are built out of grammatical rules and ontologies. Vision systems are extracted from sets of labeled pictures, where the labelings already reflect complex, human-centric assumptions about objects (which are themselves a human concept based on the more fundamental concept of affordance). In other words, an object is something that we can imagine isolated from its surroundings and that we could act on. But that definition itself is hazy and requires additional context. Speech recognition systems rely on phonemes, which are again just a very rough way of isolating and localising the relevant parts of speech.

We can readily distinguish systems raised in the world of such simplifications from an entity raised in the real world, where stuff is really, really, really complex and not necessarily subdivided into the categories that we choose to subdivide it into. The reason why artificial intelligence seems artificial and dumb (even if it can play chess or Go) is exactly that lack of interaction with reality.

To some degree this deficiency can be seen in humans as well. People who spend years in academia focusing on particular aspects of some phenomena may eventually lose the connection with the actual phenomena. That is why, in the scientific method, it is imperative to compare theory with experiment, and why, following Popper's observations on falsifiability, a theory should always be treated as incomplete and unprovable (aside from math, of course, which has its own specific methodology). A theory is essentially something that is almost certainly too simple, to the point of being wrong, but just happens to be the best thing we have so far to explain the experiment. Building unfalsifiable bits into a theory to protect it from experiment is unnatural and unintelligent, and eventually leads to a chasm between the world of academia and reality. Reality always wins -- eventually.

Going back to AI, the only way to build a more natural AI is to let it interact with the same reality that we are exposed to, and the only way to do that is to embody it in robots and allow it to discover the same (or similar) unnamed regularities. Until then, AI will seem "academic", unnatural, and similar to what the Chinese room argument tries to ridicule, and even calling it "deep learning" will not remove the apparent "shallowness".

 
