There is a widespread belief among the "artificial intelligentsia" that with the advent of deep learning all it takes to conquer some new land (application) is to create a relevant dataset, get a fast GPU and train, train, train. However, as it happens with complex matters, this approach has certain limitations and hidden assumptions. There are at least two important epistemological assumptions:
- Given a big enough sample from some distribution, we can approximate it efficiently with a statistical/connectionist model
- A statistical sample of a phenomenon is enough to automate/reason about/predict that phenomenon
Neither of these assumptions is universally correct.
Universal approximation is not really universal
There is a theoretical result known as the universal approximation theorem. In summary, it states that any continuous function (on a compact domain) can be approximated to arbitrary precision by an (at least) three-level composition of real functions, such as a multilayer perceptron with sigmoidal activation. This is a mathematical statement, but a purely existential one. It does not say whether such an approximation would be practical or achievable with, say, a gradient descent approach. It merely states that such an approximation exists. As with many such existential arguments, their applicability to the real world is limited. In the real world, we … Read more...
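For reference, the classical formulation (the standard Cybenko/Hornik statement, added here for context rather than quoted from the post): for any continuous f on a compact set K ⊂ R^n and any ε > 0, there exist N, weights w_i, biases b_i and coefficients v_i such that the one-hidden-layer network

$$ F(x) = \sum_{i=1}^{N} v_i \, \sigma(w_i^\top x + b_i) $$

satisfies $\sup_{x \in K} |F(x) - f(x)| < \varepsilon$. Note that N is unbounded and no procedure for finding the weights is given, which is exactly the existential gap discussed above.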
In my previous post I described the hardware components of my self-made time capsule/home server. It consisted of an Intel NUC micro-PC, a Netgear managed 1 Gbps switch and an Edimax 802.11ac access point. Here I'll go over the basic configuration necessary to achieve the functionality I mentioned.
I'm using Ubuntu 16.04 LTS (Long Term Support). It is a very decent Debian-based distribution and works very well on the Intel NUC. In this post I'll assume that Linux is already installed and all the hardware components are detected by the kernel (I had no issues whatsoever, it worked out of the box). The only thing that may be a problem on the NUC is Secure Boot: if it is enabled in the BIOS, disable it before you install Linux. Also make sure the boot sequence in the BIOS makes sense. After you install Linux, it's a good idea to run an update to make sure all the installed packages are the latest.
Before we begin the setup (and before we screw up our Internet connection), it is good to install a few essentials:
apt-get install openssh-server   # remote shell access
apt-get install git              # version control
apt-get install vim              # editor
apt-get install dnsmasq          # lightweight DHCP/DNS server
apt-get install vlan             # 802.1Q VLAN support
… Read more...
It will not be about AI this time, nor will it be about sci-fi. It will actually be exactly about what the title indicates. So let's begin.
Since a certain incident in the late 90's involving an 850 MB drive, I'm quite paranoid about having backups. For many years this paranoia was satisfied with an Apple Time Capsule - a handy device that acts as a Wi-Fi router and network-attached storage, which through the AFP protocol offers the Time Machine service to Mac computers. I have one back in Poland and I had one here in California, until one day in January 2017 the device suddenly died. I had owned it since 2010, so it served me well for quite a few years (I upgraded the drive to 3 TB in the meantime), but the death was still surprising and disappointing.
But what was even more disappointing was to see Apple's current offering in that segment. As mentioned, I bought my (back then 1.5 TB) capsule in 2010; now it is 2017 and Apple offers... a 2 TB Capsule for $299 and a 3 TB Capsule for $399. This is ridiculous!
Ultimately, I decided to build one myself, and I'm very happy … Read more...
I've mentioned several times that the Predictive Vision Model (PVM) is not expressible in any of the current deep learning frameworks such as TensorFlow or Caffe (at least not easily or, for that matter, efficiently). This is due to its inherent feedback and multi-scale structure. PVM is not an end-to-end trained system; it is a collection of intertwined sequence learners. That being said, I'm currently working in my free time to bring PVM to the GPU.
I'm not the most experienced person in the GPU programming domain, but I can definitely write a kernel and use the Nvidia profiler. So far my results look very encouraging: I can train more than 210 million float32 parameters at 21 fps with an Nvidia Titan X based on the Pascal architecture. In other words, that is 4.4 billion trained float32 parameters per second. This training performance matches that of deep learning models, where e.g. ResNet-50, with ~25 million parameters, can be trained at approximately 100-150 samples/s (single GPU). In fact my GPU utilisation is now close to 97% with most kernels. To some degree I feel that PVM will be even better suited for GPU implementation than end-to-end deep learning because of … Read more...
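For reference, the back-of-the-envelope arithmetic behind that comparison, using the numbers above (taking ~125 samples/s as a midpoint of the quoted ResNet-50 range):

$$ 210 \times 10^6 \ \text{parameters} \times 21 \ \text{fps} \approx 4.4 \times 10^9 \ \text{parameters/s} $$

$$ 25 \times 10^6 \ \text{parameters} \times 125 \ \text{samples/s} \approx 3.1 \times 10^9 \ \text{parameters/s} $$

so the two training throughputs are indeed of the same order of magnitude.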
[Note: this post gets updated every once in a while with new pictures.]
This is mostly a fun post, though I hope it may trigger some thinking. Since there is so much hype about self-driving cars, I decided to express myself artistically and draw a bunch of situations that the current technology most likely would not be able to deal with. Some of them might be funny, some might be dangerous. Ask yourself whether you would want to be driven by an entity which cannot understand these situations, and if you are developing a self-driving car, let me know in the comments whether such situations could indeed be misinterpreted. I'll start with just a few; it takes time to draw them and I'm not a very good artist. Email me if you have ideas for additional situations.
Artwork 1: Open manhole. Unlike with a regular pothole, driving into an open manhole can lead to a disaster. Would a driverless car figure that out? If you consider an open manhole without construction cones an unlikely possibility, take a look at this video.
Artwork 2: Stop sign prank. Somebody (bored teenagers?) put tens of stop … Read more...
This post could be considered a continuation of my previous post, "AI and the ludic fallacy." Bear with me, as this post will make some important yet philosophical points. There are many people who cringe when they hear the p-word – I have been known to be such a person myself. As with many fields out there, there is a lot of low-quality material (namely BS) in philosophy. However, I have also seen many incredibly insightful philosophy pieces, and it seems to be a helpful discipline whenever one needs to get out of the "box".
This post is about one such box.
An Example Box (as seen from another Box)
A brief digression from AI to illustrate where we're going. Many successful disciplines such as math and logic operate within a carefully designed box — a set of axioms from which truths are derived. Once a decent set of axioms is established, very elegant and exciting theories can be built. Some of these theories may be successful in modeling physical reality: classical math built on set theory is so successful that after working within it, one may be tempted to think that reality is the set theory … Read more...
Caution: due to a large number of animations this post may take a while to load (depending on your connection speed), please be patient and don't reload unless necessary. The animations will likely load before you read the text.
Scalability in Machine Learning
Scalability is a word with many meanings and can be confusing, particularly when applied to machine learning. For me, the meaning of scalability is the answer to this question:
Can an instance of the algorithm be practically scaled to larger/parallel hardware and achieve better results in approximately the same (physical) time?
That is different from the typical understanding of data parallelism, in which multiple instances of an algorithm are deployed in parallel to process chunks of data simultaneously. An example of scalability of an instance (in the sense defined above) is computational fluid dynamics (CFD). Aside from the need to obtain better initial conditions, one can run the fluid dynamics on a finer grid and achieve better (more accurate) results. Obviously this requires more compute, but generally the increase in complexity can be offset by adding more processors (there are some subtleties related to Amdahl's law and synchronisation). For that reason, most of the world's giant supercomputers are … Read more...
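For reference, the Amdahl's law subtlety mentioned above (the standard formula, added here for context): if a fraction p of the work can be parallelised, the speedup on N processors is

$$ S(N) = \frac{1}{(1 - p) + p/N} \le \frac{1}{1 - p} $$

so the serial fraction, together with synchronisation overhead, ultimately caps how far a single instance can scale regardless of how many processors are added.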
Caution: due to a large number of animations, fair amount of traffic and the tiny size of my web hosting machine, this post may take a while to load, please be patient and don't reload unless necessary.
There has recently been a fair amount of deep learning work on video prediction and generative models that focuses on infusing motion into static pictures. One such paper is available here:
The approach taken in that paper was to train a model on a huge amount of data and explicitly separate the task of prediction into (1) the generation of a static background and (2) a moving object. As impressive as this work is, the separation into background and foreground prediction seems a bit unnatural. Given, however, the mesmerising quality of such videos (and the importance of prediction), I decided to play a little bit with our Predictive Vision Model (PVM), which is also capable of generating such "dreams". For the sake of this post I only trained a very small instance of PVM on a single, relatively short video, so the results shown here are mainly illustrative and this is by no means a full-blown scientific study. … Read more...
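To build intuition, here is a minimal sketch of the closed-loop "dreaming" idea: a trained predictive model is cut off from real input and fed its own predictions back as the next input. This is my illustration, not the actual PVM code; the predict_next method and the frame representation are hypothetical placeholders.

import numpy as np

def dream(model, seed_frame, n_steps=100):
    # Generate a "dream" by feeding a predictive model its own output.
    # model: any object with a predict_next(frame) -> frame method
    #        (hypothetical interface, standing in for a trained PVM)
    # seed_frame: initial input, e.g. a real video frame as a float array
    frames = [seed_frame]
    frame = seed_frame
    for _ in range(n_steps):
        frame = model.predict_next(frame)  # the prediction becomes the next input
        frames.append(frame)
    return np.stack(frames)

Because nothing anchors the loop to reality, small prediction errors accumulate and the sequence slowly drifts, which is precisely what gives these videos their dream-like quality.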
In many of my posts I'm directly or indirectly postulating learning of physics as a way to create a "real AI". The point I'm trying to make is so obvious that it actually is not obvious at all, and it took me some time to realise it. As with many such obvious/non-obvious things, it takes multiple angles before the essence can be captured, which is why I write this blog. I'm trying to express myself in many ways until I hit the explanation that everyone just simply gets. So let me try again in this post:
The world around us is complex. Everything to some degree interacts with everything else; there are lots of regularities, but there is also a fair amount of chaos. No two trees look identical, yet we manage to categorise them. In the language of physics, it appears that a good chunk of our reality is a "mixing system" at the "edge of chaos" (or otherwise critical). We therefore cannot predict very well what will happen. Yet I'm postulating prediction as a training paradigm; does this make any sense?
It does, and here is why: even in a chaotic world, there are numerous aspects of … Read more...
Recently Tesla showed a teaser video of their "self driving car" project, which immediately drew media attention and swarms of self-driving "enthusiasts" announcing yet again that this is already a done deal (which it is not). Here is the video in question:
Note: the above video has since been taken down; I'm now linking to a mirror.
Now this looks very impressive as a demo, but there are a few details I'd like to point out before we again declare that the self-driving car is a done deal from a technological point of view. Disclaimer: I do like Tesla and I think some of their ideas are great, but their self-driving effort seems a bit premature, somewhat over-promised and over-hyped.
- The lighting conditions in the video are perfect from a computer vision point of view. Although it is a bit foggy, the illumination is uniform and diffused. There are no hard shadows, flares or ghosts.
- The lane markings are all clearly painted and visible everywhere.
- There are no "unusual situations" (see below what I mean by that).
Just a reminder that a self-driving car was demoed as a research project in the mid-80's at CMU … Read more...
Yann LeCun, the inventor of convolutional networks, gave a talk at the CMU Robotics Institute which was conveniently recorded and made available to the general public here:
Although the talk is over an hour long, it is certainly worth watching, and I strongly recommend doing that before you read any of the following text.
After the lecture
Yann LeCun is a rather colourful character and certainly has strong opinions on many subjects. At any given time I find myself either strongly agreeing or strongly disagreeing with him, and it's no surprise it is the same this time around. Anyway, he makes several points in his talk which I think are relevant to our published work on PVM (see the PVM paper for details) and worth a more detailed comment.
- After a brief overview of the state of the art in machine learning and AI, LeCun goes on to talk about more cutting-edge stuff. He notes that the next important frontier for AI is learning "forward models" via prediction - learning "folk physics" so to speak (a.k.a. common sense). He presents the observation that reinforcement learning has a very weak learning signal in the case of sparse rewards,
… Read more...
Given the overwhelming amount of excitement (and inevitable media noise), I decided to make a concise summary of the state of the technology (as of late 2016) in order to not go insane.
What is AI?
Artificial intelligence is a misleading buzzword and these days it is used for anything having to do with automation via computing. Generally it applies to a set of optimisation methods loosely connected to outdated theories of how the brain works.
AI must be close to being solved since recent progress shows that technological singularity is inevitable and close?
Singularity may or may not happen. As with any reasoning that extrapolates certain trends, there could be barriers that prevent these prophecies from ever materialising. If we were to extrapolate the distance travelled by humans in space between the late 1940's and the early 1970's and fit it with an exponential curve, we would have had to have sent astronauts to Jupiter by now. That clearly did not happen. The same goes for Moore's law and progress in computing. Although there was a period when computing power would double every 20 months or so, it is not clear if this still applies (comparing contemporary computers with those from say 10 years … Read more...
I enjoyed Nassim Nicholas Taleb's books and like his style of calling out some of the - let's put it mildly - misconceptions in the theoretical approach to economics. One of his key ideas is the Ludic Fallacy: the use (abuse) of game analogies for real-world situations. This fallacy stems from the fact that, since reality is incomprehensibly complex, we typically restrict the scope of research (or any other mental activity) to some model world - a game - where the rules are all known (assumed). We then derive conclusions about some aspect of reality, forgetting that the conclusions were derived in the model world, and that the uncertainties as to whether the model world was accurate are inherited by those conclusions. For example: if I assume, based on previous cases, that given poll results indicate a particular candidate will win the election, I silently assume that nothing fundamental has changed since the "previous cases" and the analogy can be drawn. But if something has changed outside of the model, then my prediction may just as well be completely useless (even if it has a nice "confidence level" derived within the model). Recent US elections … Read more...
I've elaborated in my previous post on why I think predictive capability is crucial for an intelligent agent, and how we get fooled by getting 90% of motor commands right from a purely reactive system. This also relates to a way of thinking about the problem in terms of either statistics or dynamics. The current mainstream (the statistical majority) is focused on statistics, and that statistically works. However, much like with guiding behavior, the statistical majority may omit important outliers - important information is often hidden in the tail of the distribution.
I've mentioned the Predictive Vision Model, which is our (mine and a few like-minded colleagues') way to introduce the predictive paradigm into machine learning. It is described in a lengthy paper, but not everyone has the time to go through it, so I will briefly describe the principles here:
The idea is to create a predictive model of the sensory input (in this case visual). Since we don't know the equations of motion of the sensory values, the way to do it is via machine learning - simply associate the values of the inputs now with those same values in the future (think of something like an autoencoder … Read more...
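To make this principle concrete, here is a minimal sketch of the predictive association described above: a toy one-hidden-layer network trained so that the input at time t reconstructs the input at time t+1. Everything here (the class name, layer sizes, learning rate) is an illustrative placeholder of mine, not the actual PVM implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyPredictor:
    # Toy predictive "autoencoder": trained so that the input at time t
    # reconstructs the input at time t+1 (illustrative placeholder only).
    def __init__(self, n_in=64, n_hid=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.W2 = rng.normal(0.0, 0.1, (n_in, n_hid))

    def train_step(self, x_now, x_next, lr=0.1):
        h = sigmoid(self.W1 @ x_now)  # encode the current input
        x_pred = self.W2 @ h          # predict the next input
        err = x_pred - x_next         # prediction error is the only teaching signal
        grad_h = self.W2.T @ err
        # gradient descent on the squared prediction error
        self.W2 -= lr * np.outer(err, h)
        self.W1 -= lr * np.outer(grad_h * h * (1.0 - h), x_now)
        return float((err ** 2).mean())

Feeding consecutive video frames as (x_now, x_next) pairs drives the error down wherever the signal is predictable; no labels are needed, which is the whole point of the predictive paradigm.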