Learning physics is the way to go

In many of my posts I'm directly or indirectly postulating learning physics as a way to create a "real AI". The point I'm trying to make is so obvious that it actually is not obvious at all, and it took me some time to realise it. As with many such obvious/non-obvious things, it takes multiple angles before the essence can be captured, which is why I write this blog. I'm trying to express myself in many ways until I hit the explanation that everyone simply gets. So let me try again in this post:

Complexity

The world around us is complex. Everything interacts to some degree with everything else; there are lots of regularities, but there is also a fair amount of chaos. No two trees look identical, yet we manage to categorise them. In the language of physics, it appears that a good chunk of our reality is a "mixing system" at the "edge of chaos" (or otherwise critical). We therefore cannot predict very well what will happen. Yet I'm postulating prediction as a training paradigm, so does this make any sense?

It does, and here is why: even in a chaotic world, there are numerous aspects of … Read more...
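
A toy sketch of what I have in mind: a chaotic system such as the logistic map cannot be predicted far into the future, yet its one-step-ahead mapping is perfectly learnable from samples. The snippet below is only a minimal illustration of that distinction (my assumptions: a quadratic map in the chaotic regime, a simple polynomial fit as the "learner"), nothing more:

```python
import numpy as np

# Logistic map in the chaotic regime: x_{t+1} = r * x_t * (1 - x_t)
r = 3.9

def step(x):
    return r * x * (1.0 - x)

# Two trajectories that start almost identically diverge completely.
x_a, x_b = 0.4, 0.4 + 1e-9
for t in range(60):
    x_a, x_b = step(x_a), step(x_b)
print(f"difference after 60 steps: {abs(x_a - x_b):.3f}")   # O(1): long-horizon prediction is hopeless

# Yet the one-step-ahead relation x_t -> x_{t+1} is a smooth function
# that is trivial to learn from data, e.g. with a small polynomial fit.
xs = np.random.rand(1000)
ys = step(xs)
coeffs = np.polyfit(xs, ys, deg=2)      # fit x_{t+1} ~ a*x^2 + b*x + c
pred = np.polyval(coeffs, xs)
print(f"one-step prediction error: {np.abs(pred - ys).max():.2e}")  # essentially zero
```

Short-horizon prediction remains learnable even when long-horizon forecasting is not, and that is exactly the regime a predictive learner lives in.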

More thoughts on the self driving car

Recently Tesla showed a teaser video of their "self driving car" project, which immediately drew media attention and prompted swarms of self driving "enthusiasts" to once again announce that this is already a done deal (which it is not). Here is the video in question:

Note: the above video has since been taken down; I'm now linking to a mirror.

Now this looks very impressive as a demo, but there are a few details I'd like to point out before we start saying again that the self driving car is a done deal from a technological point of view. Disclaimer: I do like Tesla and I think some of their ideas are great, but their self driving effort seems a bit premature, somewhat overpromised and overhyped.

  • The lighting conditions in the video are perfect from a computer vision point of view. Although it is a bit foggy, the illumination is uniform and diffuse. There are no hard shadows, flares or ghosts.
  • The lane markings are all clearly painted and visible everywhere.
  • There are no "unusual situations" (see below what I mean by that).

Just a reminder that a self driving car was demoed as a research project at CMU in the mid 80's … Read more...

Yann LeCun lecture at CMU, 11.2016, a few comments

Yann LeCun, the inventor of convolutional networks, gave a talk at the CMU Robotics Institute which was conveniently recorded and made available to the general public here:

http://www.ri.cmu.edu/video_view.html?video_id=176&menu_id=387

Although the talk is over an hour long, it is certainly worth watching, and I strongly recommend doing so before you read any of the following text.

After the lecture

Yann LeCun is a rather colourful character and certainly has strong opinions on many subjects. At any given time I find myself either strongly agreeing or strongly disagreeing with him, and it's no surprise it is the same this time around. Anyway, he makes several points in his talk which I think are relevant to our published work on PVM (see the PVM paper for details) and worth a more detailed comment.

  1. After a brief overview of the state of the art in machine learning and AI, LeCun goes on to talk about more cutting-edge stuff. He notes that the next important frontier for AI is learning "forward models" via prediction, learning "folk physics" so to speak (a.k.a. common sense). He presents the observation that reinforcement learning has a very weak learning signal in the case of sparse rewards,
Read more...

Myths and facts about AI

Given the overwhelming amount of excitement (and the inevitable media noise), I decided to make a concise summary of the state of the technology (as of late 2016) in order not to go insane.

What is AI?

Artificial intelligence is a misleading buzzword, and these days it is used for anything having to do with automation via computing. Generally it applies to a set of optimisation methods loosely connected to outdated theories of how the brain works.

AI must be close to being solved, since recent progress shows that the technological singularity is inevitable and near?

The singularity may or may not happen. As with any reasoning that extrapolates certain trends, there could be barriers that prevent these prophecies from ever materialising. If we were to take the distance travelled by humans in space between the late 1940's and the early 1970's and fit it with an exponential curve, we would conclude that astronauts should have reached Jupiter by now. That clearly did not happen. The same goes for Moore's law and progress in computing. Although there was a period when computing power would double every 20 months or so, it is not clear if this still applies (comparing contemporary computers with those from say 10 years … Read more...
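
To see how badly this kind of curve fitting can mislead, here is a rough back-of-the-envelope sketch; the distance figures below are my own ballpark numbers, plugged in purely to illustrate the extrapolation:

```python
import numpy as np

# Very rough distances (km) reached by crewed flights, ballpark values for
# illustration only: rocket planes, Vostok 1, Gemini 11 apogee, Apollo 11.
years     = np.array([1949.0, 1961.0, 1966.0, 1969.0])
distances = np.array([2.0e1, 3.3e2, 1.4e3, 3.8e5])

# Fit an exponential trend: log(distance) ~ a * year + b
a, b = np.polyfit(years, np.log(distances), deg=1)

def extrapolate(year):
    return np.exp(a * year + b)

print(f"extrapolated crewed distance in 2020: {extrapolate(2020):.2e} km")
# Comes out many orders of magnitude beyond Jupiter (roughly 6e8 km away);
# the exponential trend simply broke.
```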

AI and the ludic fallacy

Ludic Fallacy

I enjoyed Nassim Nicholas Taleb's books and like his style of calling out some of the - let's put it mildly - misconceptions in the theoretical approach to economics. One of his key ideas is the Ludic Fallacy, that is, the use (abuse) of game analogies for real world situations. This fallacy stems from the fact that, since reality is incomprehensibly complex, we typically restrict the scope of research (or any other mental activity) to some model world - a game - where the rules are all known (or assumed). We then derive conclusions about some aspect of reality, forgetting that the conclusions were derived in the model world, and that the uncertainties as to whether that model world was accurate are inherited by those conclusions. For example: if I assume, based on previous cases, that given poll results indicate a particular candidate will win the election, I silently assume that nothing fundamental has changed since the "previous cases" and that the analogy can be drawn. But if something has changed outside of the model, then my prediction may just as well be completely useless (even if it comes with a nice "confidence level" derived within the model). The recent US elections … Read more...

Predictive Vision in a nutshell

I've elaborated in my previous post on why I think predictive capability is crucial for an intelligent agent, and how we get fooled by a purely reactive system getting 90% of motor commands right. This also relates to thinking about the problem in terms of either statistics or dynamics. The current mainstream (the statistical majority) is focused on statistics, and statistically that works. However, much like with guiding behaviour, the statistical majority may omit important outliers - important information is often hidden in the tail of the distribution.

I've mentioned the Predictive Vision Model, which is our (mine and a few like-minded colleagues') way of introducing the predictive paradigm into machine learning. It is described in a lengthy paper, but not everyone has the time to go through it, so I will briefly describe the principles here:

Idea

The idea is to create a predictive model of the sensory input (in this case visual). Since we don't know the equations of motion of the sensory values, the way to do it is via machine learning - simply associate the values of the inputs now with those same values in the future (think of something like an autoencoder … Read more...
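
A minimal sketch of that principle - associate the frame at time t with the frame at time t+1 and let the prediction error drive all of the learning - might look as follows. This is just a toy illustration of the idea on synthetic data, not the actual PVM code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frame(t, size=16):
    """Synthetic 'video': a bright blob moving horizontally, a stand-in for real sensor data."""
    frame = np.zeros((size, size))
    frame[size // 2, t % size] = 1.0
    return frame.ravel()

# Training pairs: input = frame at time t, target = frame at time t+1.
X = np.stack([make_frame(t) for t in range(500)])
Y = np.stack([make_frame(t + 1) for t in range(500)])

# A single-hidden-layer predictor (autoencoder-like, but predicting the *future* frame).
n_in, n_hid = X.shape[1], 64
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    H = sigmoid(X @ W1 + b1)        # hidden representation (useful features emerge as a side effect)
    P = sigmoid(H @ W2 + b2)        # predicted next frame
    err = P - Y                     # the prediction error is the only training signal
    dP = err * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dP / len(X); b2 -= lr * dP.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

print("mean squared prediction error:", np.mean((P - Y) ** 2))
```

No labels are needed anywhere: the future itself supervises the network.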

Reactive vs Predictive AI

This post is an extension of my previous post on Statistics vs Dynamics in machine learning. I'll try to expand here on what I think is the key missing ingredient (possibly not the only one) for efforts such as self driving cars or other robotics projects aimed at unrestricted environments.

The way the problem of control is approached in machine learning today is by end-to-end training of motor commands based on sensory input (such as e.g. here). The authors argue that the optimisation algorithm will do a better job than explicitly breaking the task down into perception/planning submodules, because it can optimise everything at once. This logic is influenced by behaviourism and by the observation that humans essentially appear to do the same thing - map sensory input onto motor commands.
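
Schematically, such an end-to-end pipeline boils down to something like the sketch below: a single network regressing the recorded human steering command directly from the camera frame. This is a generic illustration of the recipe (using PyTorch and dummy data), not any particular published architecture:

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Camera frame in, steering command out: the whole task as one mapping."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.policy = nn.Sequential(
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),                      # single output: steering angle
        )

    def forward(self, frame):
        return self.policy(self.features(frame))

model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One supervised step: regress the human driver's steering angle from the raw frame.
frames = torch.randn(8, 3, 66, 200)       # dummy batch standing in for camera images
human_steering = torch.randn(8, 1)        # recorded steering commands are the only label
loss = loss_fn(model(frames), human_steering)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Everything between perception and action is left implicit inside the weights, which is precisely what the argument below takes issue with.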

This approach is flawed, as I will try to explain in the paragraphs that follow.

What looks like a direct sensory-to-motor mapping may be a lot more complex

Looking naively at a human performing a task such as, say, driving a car, one may think that the human simply matches what he currently sees onto a motor command. It certainly looks like that, … Read more...

Statistics and dynamics

There are two very important branches of mathematics relevant for building intelligent systems: statistics and dynamics. The rationale is the following:

  • data has regularities and patterns that repeat, therefore an intelligent system should analyse them statistically
  • things in the world are in motion and that motion has regularities, therefore the intelligent system should build models of those dynamics

Although these approaches seem very compatible, it is important to understand the different modes of thinking: statistics tries to find a pattern from many samples, given that we know nothing else about the system (often assuming that things come from a known distribution). Dynamics tries to write down the equations of motion of the system given very few samples. Statistics wants to estimate the expected value and variance of things. Dynamics wants to predict the exact value of something with a strict error estimate.
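
A toy contrast of the two modes of thinking, with made-up measurements of a falling object (the numbers are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A falling object sampled at a few time points, with a little measurement noise.
g = 9.81
t = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
h = 100.0 - 0.5 * g * t**2 + rng.normal(0, 0.05, t.shape)    # height in metres

# Statistical mode: treat the heights as samples from some distribution.
print("mean height:", h.mean(), "std:", h.std())    # true, but says nothing about where the object will be

# Dynamical mode: fit the equation of motion h(t) = h0 - 0.5*a*t^2
# from the same handful of samples, then predict an exact future value.
A = np.stack([np.ones_like(t), -0.5 * t**2], axis=1)
(h0, a), *_ = np.linalg.lstsq(A, h, rcond=None)
t_future = 2.0
print(f"predicted height at t={t_future}s: {h0 - 0.5 * a * t_future**2:.2f} m (fitted a ~ {a:.2f} m/s^2)")
```

The same five samples support either a summary of what has been seen or a model of what will happen next; the second is far more powerful, but only if the form of the dynamics can be captured.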

Current machine learning is heavily biased towards statistics. Although some priors are inserted into the models, the general approach is to throw more data and compute power at a system and expect miracles, rather than to build a system that could intelligently infer things based on the dynamics (see e.g. ImageNet and similar purely statistical approaches to understanding images … Read more...