Yann LeCun lecture at CMU, 11.2016, a few comments

Yann LeCun, the inventor of convolutional networks, has given a talk at the CMU Robotics Institute, which was conveniently recorded and made available to the general public here:

http://www.ri.cmu.edu/video_view.html?video_id=176&menu_id=387

Although the talk is over an hour long, it is certainly worth watching, and I strongly recommend doing so before you read any of the following text.

After the lecture

Yann LeCun is a rather colourful character and certainly has strong opinions on many subjects. At any given time I find myself either strongly agreeing or strongly disagreeing with him, and it is no surprise that it is the same this time around. Anyway, he makes several points in his talk which I think are relevant to our published work on PVM (see the PVM paper for details) and worth more detailed comment.

  1. After a brief overview of the state of the art in machine learning and AI, LeCun goes on to talk about more cutting-edge stuff. He notes that the next important frontier for AI is learning "Forward Models" via prediction - learning "folk physics", so to speak (a.k.a. common sense). He presents the observation that reinforcement learning has a very weak learning signal in the case of sparse rewards,
Read more...
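As an aside, the forward-model idea is easy to make concrete. Here is a minimal sketch of my own (not LeCun's formulation; the toy dynamics and all names are assumptions made purely for illustration): a model learns to predict the next state of a system from the current state and action. Notice that, unlike a sparse reward, every single timestep provides a full training signal.

    import numpy as np

    rng = np.random.default_rng(0)

    def step(state, action, dt=0.1):
        # toy "world": a damped point mass (position, velocity) pushed by the action
        pos, vel = state
        vel = 0.95 * vel + dt * action
        return np.array([pos + dt * vel, vel])

    # collect transitions: (state, action) -> next state
    X, Y = [], []
    state = np.zeros(2)
    for _ in range(5000):
        action = rng.uniform(-1, 1)
        nxt = step(state, action)
        X.append(np.append(state, action))
        Y.append(nxt)
        state = nxt
    X, Y = np.array(X), np.array(Y)

    # linear forward model fitted by least squares: next_state ~ W^T [state; action]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print("mean prediction error:", np.abs(X @ W - Y).mean())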

Myths and facts about AI

Given the overwhelming amount of excitement (and inevitable media noise) I decided to make a concise summary of the state of the technology (as of late 2016) in order to not go insane.

What is AI?

Artificial intelligence is a misleading buzzword, and these days it is used for anything having to do with automation via computing. Generally it applies to a set of optimisation methods loosely connected with outdated theories of how the brain works.

AI must be close to being solved, since recent progress shows that the technological singularity is inevitable and near?

The singularity may or may not happen. As with any reasoning that extrapolates certain trends, there could be barriers that prevent these prophecies from ever materialising. If we were to take the distance travelled by humans in space between the late 1940's and the early 1970's and fit it with an exponential curve, we would have had to have sent astronauts to Jupiter by now. That clearly did not happen. The same goes for Moore's law and progress in computing. Although there was a period when computing power would double every 20 months or so, it is not clear if this still applies (comparing contemporary computers with those from say 10 years … Read more...
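To see just how badly this kind of extrapolation can fail, here is a back-of-the-envelope computation (the distance figures are rough round numbers I am assuming purely for illustration):

    import numpy as np

    # farthest distance from Earth reached by humans, in km (assumed rough values)
    years = np.array([1947.0, 1961.0, 1969.0])   # rocket planes, first orbit, the Moon
    dist_km = np.array([2e1, 3e2, 4e5])

    # fit log10(distance) = a * year + b and extrapolate the exponential trend
    a, b = np.polyfit(years, np.log10(dist_km), 1)
    for year in (1980, 2000, 2016):
        print(year, "->", 10 ** (a * year + b), "km")
    # Jupiter is roughly 6e8 km away; the actual record since 1972 remains ~4e5 km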

AI and the ludic fallacy

Ludic Fallacy

I enjoyed Nassim Nicholas Taleb's books and like his style of calling out some of the - let's put it mildly - misconceptions in the theoretical approach to economics. One of his key ideas is the Ludic Fallacy, that is, the use (abuse) of game analogies for real-world situations. This fallacy stems from the fact that, since reality is incomprehensibly complex, we typically restrict the scope of research (or any other mental activity) to some model world - a game - where the rules are all known (assumed). We then derive conclusions about some aspect of reality, forgetting that the conclusions were derived in the model world, and that the uncertainties as to whether that model world was accurate are inherited by those conclusions. For example: if I assume, based on previous cases, that given poll results indicate a particular candidate will win the election, I silently assume that nothing else fundamental has changed since the "previous cases" and that the analogy can be drawn. But if something has changed outside of the model, then my prediction may just as well be completely useless (even if it has a nice "confidence level" derived within the model). Recent US elections … Read more...

Predictive Vision in a nutshell

I've elaborated in my previous post on why I think predictive capability is crucial for an intelligent agent, and how we get fooled when a purely reactive system gets 90% of motor commands right. This also relates to a way of thinking about the problem in terms of either statistics or dynamics. The current mainstream (the statistical majority) is focused on statistics, and statistically that works. However, much like with guiding behaviour, the statistical majority may omit important outliers - important information is often hidden in the tail of the distribution.

I've mentioned the Predictive Vision Model, which is our (mine and a few like-minded colleagues') way to introduce the predictive paradigm into machine learning. It is described in a lengthy paper, but not everyone has the time to go through it, so I will briefly describe the principles here:

Idea

The idea is to create a predictive model of the sensory input (in this case visual). Since we don't know the equations of motion of the sensory values, the way to do it is via machine learning - simply associate values of inputs now with those same values in the future (think of something like an autoencoder … Read more...
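A minimal sketch of that principle, under simplifying assumptions of my own (a single linear predictor in plain NumPy and a toy one-dimensional "video" of a moving dot; this is not the PVM code itself):

    import numpy as np

    N = 16                          # a frame is a 1-D "retina" of N pixels

    def frame(t):
        # toy stimulus: a bright dot moving one pixel per step, wrapping around
        f = np.zeros(N)
        f[t % N] = 1.0
        return f

    W = np.zeros((N, N))            # linear predictor: next_frame ~ W @ frame
    lr = 0.1
    for t in range(5000):
        x, target = frame(t), frame(t + 1)
        err = W @ x - target
        W -= lr * np.outer(err, x)  # online gradient step on squared prediction error

    print("error on a held-out step:", np.abs(W @ frame(123) - frame(124)).max())

The supervision here comes for free: the future itself provides the labels, no human annotation required.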

Reactive vs Predictive AI

This post is an extension of my previous post on Statistics vs Dynamics in machine learning. I'll try to expand here on what I think is the key missing ingredient (possibly not the only one) for efforts such as the self driving car and other robotic projects aimed at unrestricted environments.

The way the problem of control in machine learning is approached today is by end-to-end training of motor commands based on sensory input (such as e.g. here). The authors argue that the optimisation algorithm will do a better job than explicitly breaking down the task into perceptual/planning submodules, because it can do everything at once. This logic is influenced by behaviourism and the observation that humans essentially appear to do the same thing - map sensory input onto motor commands.
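For concreteness, this is the shape of the mapping being proposed - one network regressing a motor command straight from pixels. The sketch below is mine (made-up sizes and names, loosely in the spirit of the end-to-end driving work linked above), not the actual system:

    import numpy as np

    rng = np.random.default_rng(0)

    # toy end-to-end controller: flattened camera image -> steering angle
    D_IN, D_HID = 64 * 64, 128                    # assumed resolution and width
    W1 = rng.normal(0, 0.01, (D_HID, D_IN))
    W2 = rng.normal(0, 0.01, (1, D_HID))

    def steer(image):
        h = np.maximum(0.0, W1 @ image.ravel())   # ReLU hidden layer
        return (W2 @ h).item()                    # scalar motor command

    # in end-to-end training W1 and W2 would be fitted by regressing recorded
    # human steering against the corresponding camera frames
    print(steer(rng.random((64, 64))))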

This approach is flawed, as I will try to explain in the paragraphs that follow.

What looks like a direct sensory-to-motor mapping may be a lot more complex

Looking naively at a human performing a task such as, let's say, driving a car, one may think that the human performs the task of matching what he currently sees onto a motor command. It certainly looks like that, … Read more...

Statistics and dynamics

There are two very important branches of mathematics relevant for building intelligent systems: statistics and dynamics. The rationale is the following:

  • data has regularities and patterns that repeat; therefore an intelligent system should analyse them statistically
  • things in the world are in motion and that motion has regularities; therefore an intelligent system should build models of those dynamics

Although these approaches seem very compatible, it is important to understand the different modes of thinking: statistics tries to find a pattern based on many samples, given that we know nothing else about the system (often assuming that things come from a known distribution). Dynamics tries to write down the equations of motion of the system given very few samples. Statistics wants to estimate the expected value and variance of things. Dynamics wants to predict the exact value of something with a strict error estimate.
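A toy contrast between the two modes, under assumptions of my own choosing (noisy observations of a falling object, plain NumPy): the statistical mode summarises the samples, while the dynamical mode fits the equation of motion and then predicts an exact future value.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 20)
    y = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.size)   # falling object + noise

    # statistical mode: summarise the samples as expected value and variance
    print("mean:", y.mean(), "variance:", y.var())

    # dynamical mode: fit the equation of motion y = 0.5 * g * t^2, then predict
    g = 2 * np.polyfit(t, y, 2)[0]
    print("estimated g:", g, "predicted y at t=3:", 0.5 * g * 3**2)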

Current machine learning is heavily biased towards statistics. Although some priors are inserted into the models, the general approach is to throw more data and compute power at a system and expect miracles, rather than to build a system that could infer intelligently based on the dynamics (see e.g. ImageNet and similar purely statistical approaches to understanding images … Read more...

Self driving mirage

Everybody is trying to build a self driving car today. Google has been testing their solution for several years now, Tesla just announced they'd be putting "self driving hardware" onto their newly manufactured cars, Uber has a big effort with Volvo in Pittsburgh, comma.ai is trying to ship a box for outfitting certain cars with a self driving mode, etc. Obviously the car manufacturers are following, with Ford making announcements recently, BMW working silently, and so on and so on. Some of these efforts are explicitly cautious about what they promise (driver-assist technology rather than full autonomy, e.g. Toyota), but many voices, particularly VCs from the Bay Area, are hyperactively announcing how great life will be and how the self driving car (in the sense of full autonomy) is a done deal.

Well, I would not be a sceptic if I did not put all those hyper-optimistic statements in doubt. Let me go through a few claims about self driving cars one by one and put my sceptical comment next to each statement. To be frank: I'm not against the technology, I'm against the hype.

  1. Self driving cars will be
Read more...

What if we had a warp drive?

Here is something completely different. Nothing today about AI or deep learning.

I'm a big fan of Star Trek and generally like the utopian vision of the future that Gene Roddenberry gave us. But obviously this is just a vision and a TV show, so it's full of stuff that makes people watch it. Inspired by that vision though, I've been daydreaming about what it would be like if we actually had 24th-century technology.

This will just be a daydreaming exercise, so let us not bother for now with whether faster-than-light travel is feasible. Clearly, with our current understanding of physics, it does seem like a very fundamental limitation. But there may be some new physics lurking, perhaps looking crazy - then again, quantum mechanics did look crazy in the beginning (and it still does) and yet has proven to be extremely good at describing nature.

Here are my assumptions:

  • faster-than-light travel is possible at a rate of, say, 1 light year per hour. For now let's just assume that the "warp" drive takes the ship into a thin, wormhole-like tube, so when the ship is in warp mode it cannot interact with matter and
Read more...

The Elephant in the Chinese room

There is an ancient argument in the field of AI called the Chinese room experiment. The thought experiment proposed by John Searle in the early eighties goes as follows:

  1. You put somebody who does not know Chinese in a room
  2. You give them a lengthy set of instructions (a program) on how to respond to given Chinese symbols
  3. Finally you run the experiment by feeding Chinese sentences into the input and reading sentences at the output. The Chinese speakers outside are convinced they are having a conversation with a sentient being, but the poor guy inside just shuffles symbols and has no idea what he is conversing about

The conclusion is that even though the external observers assume (via the Turing test) that they are observing intelligence, the guy inside is clearly unaware of what is going on, and therefore the intelligence is somehow unreal.

Personally I have several issues with that experiment. First of all, it is a thought experiment, and it assumes we can have externally recognised intelligence implemented by a guy with a book of symbol transformations. Although a computational input/output relation like that should be implementable by a "computer", the size of the necessary derivations could be enormous. In … Read more...

The peculiar perception of the problem of perception

In the previous posts I've been investigating the current state-of-the-art deep nets in a casual vision application - telling what is in an image taken in an average office or on an average boring street. I've also played a bit with adversarial examples to show how deep nets can be fooled. These failure modes tell us something important about the level of perception we are dealing with - a very basic level. In this post I will discuss why I think perception is such an elusive problem. Let's begin with vision.

The blind spot

Each of us is born with a blind spot in the visual field - the place where the nerve fibres from the retina exit the eyeball. However, unless somebody tells us how to discover it, we are completely ignorant of its existence. In some sense it could be qualified as an example of anosognosia - a condition in which humans are not aware of a defect in their perception. A more extreme case of this is known as Anton-Babinski syndrome, typically occurring after brain damage, in which the patient claims to see even though he is technically blind! As unbelievable as this seems, patients will confabulate … Read more...

Adversarial red flag

In the previous post I applied an off-the-shelf deep net to get an idea of how it performs on average street/office video. The purpose of this exercise was to critically examine these award-winning models and reveal what they are actually like. The results were a mixed bag. The network was able to capture the gist of the scene, but made serious mistakes every once in a while. Granted, the model I used for that experiment was trained on ImageNet, which has a few biases and is probably not the best set for testing "visual capabilities in the real world". In the current post I will discuss another problem which is plaguing deep learning models - adversarial stimuli.

Deep nets can be made to fail on purpose. This was first shown in [1], and there have been quite a few papers since then with different methods of constructing stimuli that fool deep models. In the simplest case one can derive these stimuli directly from the network itself. Since ConvNets are purely feedforward systems (most of them at least), we can trace the gradients back. Typically gradients are used to modify the weights such that they better fit the given … Read more...
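To illustrate the mechanism on a toy example (a hedged sketch of the general gradient idea, not the exact method of [1]; the "network" here is just a single logistic unit in NumPy): instead of using the gradient of the loss to update the weights, we take it with respect to the input and nudge the input so the decision flips.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=100)              # a "trained network": one logistic unit
    x = rng.normal(size=100)              # an input it classifies
    p = 1 / (1 + np.exp(-w @ x))          # confidence that x belongs to class 1

    # gradient of the cross-entropy loss w.r.t. the INPUT (not the weights) is
    # (p - label) * w; stepping along its sign increases the loss
    eps = 0.1
    x_adv = x + eps * np.sign((p - 1.0) * w)

    p_adv = 1 / (1 + np.exp(-w @ x_adv))
    print("confidence before:", p, "after:", p_adv)

On a real ConvNet the same trick yields perturbations that are nearly invisible to the human eye yet completely change the predicted class.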

Just how close are we to solving vision?

There is a lot of hype today around deep learning, a class of multilayer perceptrons with some 5-20 layers featuring convolutional and pooling layers. Many blogs [1,2,3] discuss the structure of these networks and there is plenty of code published, so I won't get into much detail here. Several tech companies have invested a lot of money into this research and everyone has very high expectations of these models' performance. Indeed they've been winning image classification competitions for several years now, and the media report superhuman performance on some visual classification task every once in a while.
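For readers who haven't seen the two building blocks just mentioned, here is a bare-bones sketch of convolution and max pooling in plain NumPy (sizes are made up; real implementations are batched, multi-channel and heavily optimised):

    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(img, kernel):
        # "valid" convolution of a single-channel image with one kernel
        kh, kw = kernel.shape
        h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def maxpool(x, s=2):
        # keep the strongest response in each s x s block
        h, w = x.shape[0] // s, x.shape[1] // s
        return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

    img = rng.random((28, 28))
    feat = np.maximum(0.0, conv2d(img, rng.normal(size=(3, 3))))   # conv + ReLU
    print(maxpool(feat).shape)    # (13, 13): a smaller, translation-tolerant map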

Now, just looking at the numbers from the ImageNet competition does not really tell us much about how good these models are; we can maybe only confirm that they are much better than whatever came before them (for that benchmark at least). With the media reporting superhuman abilities, high ImageNet numbers, and big CEOs pumping hype and showing sexy movies of a car tracking other cars on the road (a 2-minute video looped X times, which seems a bit suspicious), one can get the impression that vision is a solved problem.

In this blog post (and a few others coming … Read more...

Intelligence is real

So we are trying to build Artificial Intelligence. But what is it? Is a program playing chess or go intelligent? After some thought, I think most people would agree that it is not, really. It's just a computer program that managed to master a game. Is a large neural network - optimised with gradient descent to approximate a dataset - intelligent? Well, it is just a function approximator, so technically I would say no. All these exercises do capture some aspect of what we would call intelligence, but the core of the idea seems elusive.

So why all the fuss about Artificial Intelligence?

A bit of history

The term "Artificial Intelligence" was coined by Prof. John McCarthy for the famous Dartmouth Conference in 1956. By his own account he had to invent something to get the funding. Since its very origin the term has caused controversies and boom-bust cycles known as AI winters, among which the better documented ones are the Lighthill report in 1974, Minsky and Papert's book Perceptrons in 1969 (which stalled connectionist studies for quite a while), the 1987 collapse of expert systems (predicted by Minsky and Schank), and the more recent, smaller crisis in backpropagation-powered neural networks … Read more...

When you are 80% there means you are not there

Apparently we live in a world where the singularity is about to happen and artificial intelligence (AI) will cover every aspect of our lives. But the field of AI has always been inflated by bubbles and busts known as AI winters. Why is that so, and is this time different?

Human psychology

There are several weaknesses of human psychology that make us very susceptible to hype in AI. First of all, we should note that humans have amazing perception, particularly visual perception. The problem is that the great majority of our marvellous vision develops by the age of 2, and so none of us remembers what it's like to not perceive the world correctly. By the time we begin to verbalise (and remember anything), all the low- and mid-level perceptual machinery is up and running. So our psyche wakes up in a world where everything already makes sense, and what remains to be learned and achieved are the higher cognitive tasks.

This phenomenon is reflected in our approach to AI. We tend to believe that artificial intelligence is about playing chess or go (or Atari) because that is the kind of higher cognitive task that we are excited about by the … Read more...

PVM is out

So finally, after many months, we can share our progress. The Predictive Vision Model (PVM) is a new recurrent learning architecture we've been exploring for a while now. The paper showing initial results is available here: https://arxiv.org/abs/1607.06854 and the corresponding code is at https://github.com/braincorp/PVM .

So what is PVM? It is a new approach to learning the foundations of perception in an unsupervised way. We exploit the idea of multi-scale, multi-level stacked predictive encoders (similar to an autoencoder, but trying to predict the next frame in a sequence of inputs). We then find that if we train this architecture online, we can liberally wire it with feedback and lateral connectivity and nothing breaks! So we end up with a scalable, unsupervised architecture that naturally operates in time and is able to exploit all the regularities which are so obvious to us - humans, highly visual animals - that we don't even notice them consciously until we are faced with an optical illusion.
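To give a flavour of the building block (a drastic simplification of mine, not the actual braincorp/PVM code - see the repository for the real thing): each unit is a small predictor trained online on the next frame, and its hidden code is what gets passed up, down and sideways in the hierarchy.

    import numpy as np

    rng = np.random.default_rng(0)

    class PredictiveUnit:
        # encode the input, predict the NEXT input from the code, learn online
        def __init__(self, n_in, n_hid, lr=0.05):
            self.W1 = rng.normal(0, 0.1, (n_hid, n_in))
            self.W2 = rng.normal(0, 0.1, (n_in, n_hid))
            self.lr = lr

        def step(self, x, x_next):
            h = np.tanh(self.W1 @ x)      # hidden code, shared with other units
            err = self.W2 @ h - x_next    # prediction error on the next input
            grad_h = self.W2.T @ err      # backpropagate through the decoder
            # one online gradient step on the squared prediction error
            self.W2 -= self.lr * np.outer(err, h)
            self.W1 -= self.lr * np.outer(grad_h * (1 - h**2), x)
            return h, np.abs(err).mean()

    unit = PredictiveUnit(8, 8)
    x = rng.random(8)
    for _ in range(5000):
        x_next = np.roll(x, 1)            # toy sequence: a cyclically shifting pattern
        h, e = unit.step(x, x_next)
        x = x_next
    print("prediction error after online training:", e)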

This is really just the beginning of the work. We experimented a lot, and therefore decided not to invest in a GPU implementation, but now that certainly is a good avenue to pursue. Recurrent feedback and online operation make it difficult … Read more...