AI and the ludic fallacy

Ludic Fallacy

I have enjoyed Nassim Nicholas Taleb's books and like his style of calling out some of the - let's put it mildly - misconceptions in the theoretical approach to economics. One of his key ideas is the Ludic Fallacy, that is, the use (and abuse) of game analogies for real-world situations. The fallacy stems from the fact that, since reality is incomprehensibly complex, we typically restrict the scope of research (or any other mental activity) to some model world - a game - where the rules are all known (assumed). We then derive conclusions about some aspect of reality, forgetting that those conclusions were derived in the model world and that they inherit all the uncertainty as to whether the model world was accurate in the first place. For example: if I assume, based on previous cases, that given poll results indicate a particular candidate will win the election, I silently assume that nothing fundamental has changed since those "previous cases" and that the analogy can be drawn. But if something has changed outside of the model, my prediction may just as well be completely useless (even if it comes with a nice "confidence level" derived within the model). The recent US election is a nice real-world example.
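To make the poll example concrete, here is a minimal sketch (toy numbers, nothing to do with real polling data) of how a tidy confidence interval computed inside the model world says nothing about a shift happening outside of it:

```python
# Toy illustration of the ludic fallacy (hypothetical numbers, not real polling data):
# the "confidence" below is computed entirely inside a model world that assumes
# election-day voters behave like the polled sample. If that assumption breaks,
# the number is meaningless - and the code has no way of knowing that.
import numpy as np

rng = np.random.default_rng(0)

# Model world: polls show 52% support, sampled i.i.d. from an unchanging population.
poll = rng.binomial(1, 0.52, size=2000)          # 2000 simulated poll responses
p_hat = poll.mean()
se = np.sqrt(p_hat * (1 - p_hat) / len(poll))    # standard error under the i.i.d. model
print(f"Model-world estimate: {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")

# Reality: something fundamental changed after the polls (say, a turnout shift),
# and true support on election day is 48%. The tidy confidence interval above
# said nothing about this possibility, because it lives outside the model.
election_day = rng.binomial(1, 0.48, size=100_000)
print(f"Actual outcome: {election_day.mean():.3f}")
```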

Artificial Intelligence

Now in the context of AI: we work with datasets and universes of discourse (a classical term, not much used since the times of the Lighthill report), rarely exposing our models to reality. Datasets do originate from reality, but Machine Learning results often remain in the dataset domain and don't percolate back to reality (see this interesting paper for reference).

Much like unexpected stock market crashes in finance remind us how much our economic models are really worth, the sobering reality check for AI is robotics. In robotics things need to work for real, and a few percent error rate on a dataset may mean a disaster. As it turns out, the hardest things in AI for robotics are the most basic perceptual and motor tasks, as summarised by Moravec's paradox:

"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility"

I claim that this is the ludic fallacy right here: we can play games with AI, but we can't equip a robot with even the most basic perceptual/motor capabilities.

Obviously the problem is that in the unrestricted robotic case we cannot make a ton of assumptions about what reality will look like, and therefore we cannot cheat the system - which is why most robots these days are found in very restricted factory environments. Hans Moravec first stated his paradox in 1988, many years ago (in computing and AI that is eons), so many readers may think it no longer applies. For those who do, I have taken the liberty of putting together a small clip (turn on the speakers for even better entertainment):

Admittedly the selection is a bit biased, but essentially that is it: state of the art robotics on the left (2015 DARPA challenge), state of the art "AI" on the right. Almost 30 years have passed since Moravec made his observation; computers have become orders of magnitude faster, cameras orders of magnitude better, and... nothing has changed. The only difference is that now we can play a few Atari games and Go with "AI" using reinforcement learning.

AI winters

AI is also known for periodic hype/bust cycles known as AI winters, which to some degree resemble stock market crashes. There have been a few of those (see my other post for a more complete historical review). What they have in common, however, is that the depression typically occurs once the AI hype wave gets out of the "universe of discourse" and smacks onto the rocks of reality. For the hype cycle of the 60's that reality was the inability to accurately translate from Russian to English. For the most recent bust, which happened around 2003, it was CAPTCHAs. What will it be this time?

Self driving car

My bet is that the self-driving car will demolish the current AI hype. And I'm not talking about assisted driving but about full (level 5) autonomy, as only that makes the case for the gigantic investments made by numerous companies. Now don't get me wrong: I'd love to have one; my entire work is devoted to solving the fundamental problems that would allow for one. But at the same time, I'm astonished to see so many other people working in the field of AI, enclosed in their model domains, not seeing the problem! The key observation is this: a self-driving car is a robotic device operating in an unrestricted environment. We cannot possibly assume that roadways are restricted domains, since in reality literally anything can happen in the middle of the road. There are several other problems which I have previously discussed, but the fundamental one is that we keep building AI as statistical pattern matchers. Such AI can only deal with the stuff it has seen before; it cannot anticipate, identify outliers (new, unknown things) and react appropriately.
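As a minimal illustration of that last point (a toy sketch with made-up data, using scikit-learn's LogisticRegression as a stand-in for the pattern matcher), a classifier trained on two clusters will happily assign an input unlike anything it has ever seen to one of the known classes, with near-total "confidence":

```python
# Toy sketch of why a statistical pattern matcher cannot flag outliers:
# it is forced to map any input - however alien - onto one of the classes
# it has seen before, and it has no notion of "I have never seen this".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated 2-d clusters ("things seen before").
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),
               rng.normal([4, 4], 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# An out-of-distribution point, nothing like either training cluster.
outlier = np.array([[-30.0, 60.0]])
proba = clf.predict_proba(outlier)[0]
print(f"Predicted class: {clf.predict(outlier)[0]}, 'confidence': {proba.max():.4f}")
# The model reports near-certainty for one of the known classes, because
# "this is something new" is not among the answers it can give.
```

The point is not this particular model; any purely discriminative pattern matcher shares the same blind spot.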

Now, that being said, I think the time is right to actually solve the appropriate problems, and I have put forward a broad proposal on how to approach AI differently - in summary, learn the stuff that is constant - physics - rather than try to memorise all the corner cases. The problem is, once there is an AI winter, everyone doing AI will get equally busted, even whistleblowers like me.

 
