Singularity missed

Every now and then in discussions of AI/AGI and whatnot comes up the central figure of that entire intellectual movement - Ray Kurzweil. And with him inevitably comes some form of an exponential chart like the one below: 

Basically the curve depicts Moore's law (which is not disputable), with a few additional labels suggesting that a particular level of computing performance is somehow equivalent to the processing power of the brains of various animals. 

Superficially this looks fine, but of course the problem is hidden in how we arrive at these equivalences. The typical answer to this question is that perhaps the labels should move left or right along the curve, but they sit there somewhere, so it's fine - we might be off by a year or two, who cares. Whether it even makes sense to put brains alongside computers on that chart typically isn't even questioned. 

Since I want to keep this post short, let's cut straight to the conclusion - this chart alone shows we are off by at least 23 years from original predictions. Why?

Let's take a closer look: Kurzweil claims we should be seeing insect brain capability in $1000 … Read more...
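To make the arithmetic behind such a delay estimate concrete, here is a minimal sketch (my own illustration, not the post's actual calculation): under an assumed Moore's-law doubling period, a shortfall factor in compute-per-dollar converts directly into years of delay. Both numbers below are placeholder assumptions chosen only to show the shape of the calculation.

```python
# Minimal sketch (illustration only): convert a shortfall in compute-per-dollar
# into a delay in years, assuming capability doubles every `doubling_period_years`.
import math

def delay_in_years(shortfall_factor: float, doubling_period_years: float = 2.0) -> float:
    """Years needed to close a `shortfall_factor` gap at the assumed doubling rate."""
    return math.log2(shortfall_factor) * doubling_period_years

# Placeholder numbers: a ~3000x shortfall under a 2-year doubling period gives
# log2(3000) * 2 ≈ 23 years of delay.
print(round(delay_in_years(3000.0), 1))
```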

Fat tails are weird

If you have taken a statistics class, it may have included stuff like basic measure theory: Lebesgue measures and integrals and their relation to other means of integration. If your course was math heavy (like mine was), it may have included Carathéodory's extension theorem and even the basics of operator theory on Hilbert spaces, Fourier transforms etc. Most of this mathematical tooling would be devoted to the proof of one of the most important theorems on which most of statistics is based - the central limit theorem (CLT).

The central limit theorem states that for a broad class of what we in math call random variables (which represent realizations of some experiment that involves randomness), as long as they satisfy certain seemingly basic conditions, their average (suitably centered and rescaled) converges to a random variable of a particular type, one we call normal, or Gaussian. 
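For reference, the standard textbook statement (a well-known fact, not something quoted from the post): if $X_1, X_2, \dots$ are independent and identically distributed with mean $\mu$ and finite variance $\sigma^2$, then

$$\sqrt{n}\,\frac{\bar{X}_n - \mu}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0,1) \quad \text{as } n \to \infty,$$

where $\bar{X}_n$ denotes the average of the first $n$ variables.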

The two conditions that these variables need to satisfy are that they are:

  1. Independent
  2. Have finite variance

In human language this means that individual random measurements (experiments) "don't know" anything about each other, and that each one of these measurements "most of the time" sits within a bounded range of values, as in it can actually be pretty much always … Read more...
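To see why the finite-variance condition matters, here is a minimal numerical sketch (my own illustration, not code from the post): averages of a finite-variance distribution concentrate just as the CLT promises, while averages of a fat-tailed Cauchy distribution, which has no finite variance, never settle down.

```python
# Minimal sketch (illustration only): sample means of a finite-variance
# distribution concentrate; sample means of a Cauchy distribution do not.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_trials = 10_000, 1_000

# Finite variance: uniform on [0, 1]; the means cluster tightly around 0.5.
uniform_means = rng.uniform(0.0, 1.0, size=(n_trials, n_samples)).mean(axis=1)

# Infinite variance: standard Cauchy; the mean of 10,000 draws is just as wild
# as a single draw, so averaging buys nothing.
cauchy_means = rng.standard_cauchy(size=(n_trials, n_samples)).mean(axis=1)

print("uniform: spread of the sample means =", uniform_means.std())  # ~0.003
print("cauchy : spread of the sample means =", cauchy_means.std())   # large and unstable
```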

The Church of AGI

As a child I was raised Catholic and I vividly remember what it was like to believe in God and all the divine entities. It does bring a certain amount of comfort to our lives, removes loneliness, gives a broader sense to existence. At some point however I started questioning the things I was told and eventually became more of a deist, and at this point pretty much an atheist. I don't claim to know the answer to life, the universe and everything (except that it is 42!); in fact, although I much prefer a rational and objectivist approach to reality, I believe science as we know it is still barely scratching the surface of the secrets of reality, and I would not exclude the possibility that those secrets are fundamentally unknowable. I actually think it is totally fine to admit that we live our lives in a world of uncertainty, with a plethora of events and processes around us we only pretend to understand. Moreover, even while being generally an atheist, I'm willing to admit that lots of stories and rules originating in religious texts have some level of universality, especially if some of them survived for thousands of … Read more...

The Atom of Intelligence

Back in the very distant past, perhaps over 2 billion years ago, a wonderful thing happened: a strand of nucleic acid found itself encapsulated in a little protein bubble, along with a few other ingredients sufficient for it to replicate. This in fact may have happened millions of times before, each time dying out after a few generations. But one such bubble that appeared that day in the primordial sea was going to survive; this was the one that was going to make it and launch an incredible evolutionary process lasting to this day. A process that managed to create incredibly complex beings, including you and me. 


As soon as this bubble of life started to replicate, the process "guiding" its evolution "noticed" that there are effectively two aspects of control necessary for survival:

  • Internal control regulating the expression of genetic code and other internal reactions - we call this process metabolism today.
  • External control regulating the interaction of the bubble with the surrounding environment - we call this process behavior today.

Initially both control mechanisms were likely only tuned at a generational level, not so much at the level of life of an individual, but clearly an evolutionary pressure to … Read more...

AI Reflections

Statisticians like to insist that correlation should not be confused with causation. Most of us intuitively understand this difference, which is actually not a very subtle one. We know that correlation is in many ways weaker than a causal relationship. A causal relationship invokes some mechanics, some process by which one process influences another. A mere correlation simply means that two processes just happened to exhibit some relationship, perhaps by chance, perhaps influenced by yet another unobserved process, perhaps by an entire chain of unobserved and seemingly unrelated processes. 
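As a minimal sketch of that last point (my own illustration, not from the post), two processes that never influence each other can still show a strong correlation when both are driven by a hidden third process:

```python
# Minimal sketch (illustration only): a hidden common cause z produces a strong
# correlation between x and y even though neither influences the other.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)             # unobserved confounder
x = 2.0 * z + rng.normal(size=n)   # x depends only on z plus its own noise
y = -1.5 * z + rng.normal(size=n)  # y depends only on z plus its own noise

print("corr(x, y)          =", np.corrcoef(x, y)[0, 1])                      # ~ -0.74
print("corr with z removed =", np.corrcoef(x - 2.0 * z, y + 1.5 * z)[0, 1])  # ~ 0
```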

When we rely on correlation, we can have models that are very often correct in their predictions, but they might be correct for all the wrong reasons. This distinction between a weak, statistical relationship and a much stronger, mechanistic, direct, dynamical, causal relationship is really at the core of what in my mind is the fatal weakness of the contemporary approach to AI. 

The argument

Let me role-play what I think is a distilled version of a dialogue between an AI enthusiast and a skeptic like myself: 

AI enthusiast: Look at all these wonderful things we can do now using deep learning. We can recognize images, generate images, generate reasonable answers to questions, this is Read more...

Farcical Self-Delusion

It's time for another post in the Tesla FSD series, which is part of the general self driving car debacle discussed in this blog since 2016 [1,2,3,4,5,6,7]. In summary, the thesis of this blog is that AI hasn't reached the necessary understanding of physical reality to become truly autonomous, and hence the contemporary AI contraptions cannot be trusted with important decisions such as those risking human life in cars. In various posts I go into detail on why I think that is the case [1,2,3,4,5,6] and in others I propose some approaches to get out of this pickle [1,2,3]. In short, my claim is that our current AI approach is at its core statistical and effectively "short tailed" in nature, i.e. the core assumption of our models is that there exist distributions representing certain semantic categories of the world and that those distributions are compact and can be efficiently approximated with a set of rather "static" models. I claim this assumption is wrong at the foundation, the semantic distributions, … Read more...
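As a minimal sketch of the "short tailed" critique (my own illustration, not code from the post; the distributions below are placeholder assumptions): a single "static" Gaussian model fit to typical data can assign essentially zero probability to a tail event that the real, fatter-tailed world produces regularly.

```python
# Minimal sketch (illustration only): a Gaussian fit to typical samples from a
# fatter-tailed world drastically underestimates how often extreme events occur.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# The "world": Student's t with 3 degrees of freedom (fatter tails than a Gaussian).
train = rng.standard_t(df=3, size=100_000)

# The "static" model: a single Gaussian fit to the observed sample.
mu, sigma = train.mean(), train.std()

x = 25.0  # an extreme but perfectly possible event
print("true tail prob  P(X > 25):", stats.t(df=3).sf(x))         # ~7e-5: rare, but it happens
print("model tail prob P(X > 25):", stats.norm(mu, sigma).sf(x)) # vanishingly small: "can't happen"
```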

Brain computer confusion

There is a never-ending discussion, which can very concisely be summarized in the tweet below:

And frankly any time I see similar exchanges (and I see a lot of them) I get mildly irritated. Let me get to the essence. 

Computer analogy

Computers have undoubtedly been the shaping invention of the past century and hence they have become a strong theme in our culture. Since the theory on which computers have been built is a branch of mathematics, by definition an abstract discipline, computers have also had a major impact on philosophy. We learned for example that everything we can write an equation for can in principle be calculated on a computer (a toy numerical sketch of this idea follows the list below). This leads to somewhat profound philosophical consequences summarized as follows:

  1. Stuff we can write equations for is in principle computable
  2. We can write equations for physical interactions of molecules
  3. Everything is made of molecules
  4. Hence everything is computable
  5. Hence in principle we could simulate an entire brain in a computer
  6. And since we can in principle simulate a Turing machine in a brain, brains and computers have to be equivalent
  7. Furthermore, in principle we could simulate the entire Universe
  8. Hence the Universe must be a computer too 
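A toy numerical sketch of steps 1 and 2 above (my own illustration, not from the post): once we can write down the equation for an interaction, a computer can integrate it step by step - here two "molecules" coupled by a simple spring-like force.

```python
# Toy sketch (illustration only): integrate Newton's equations for two particles
# coupled by a harmonic (spring-like) force, F = -k * (x1 - x2).
def simulate(x1, x2, v1, v2, k=1.0, m=1.0, dt=0.001, steps=10_000):
    """Step the two-particle system forward in time and return the final positions."""
    for _ in range(steps):
        f = -k * (x1 - x2)       # force on particle 1; particle 2 feels the opposite
        v1 += (f / m) * dt
        v2 += (-f / m) * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, x2

# The particles oscillate around their common center; everything here is "just
# equations", which is the sense in which the system is computable.
print(simulate(x1=-1.0, x2=1.0, v1=0.0, v2=0.0))
```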

When … Read more...

AI mid 2021. Self driving car meets reality.

The pandemic has largely overwhelmed the news cycle over the past year, influencing and largely deflating the AI hype train. There were a few developments though which I'd consider significant. Some of them were very well predicted by articles in this blog, and some were surprising. Let's jump right in. 

Million Robotaxis in wonderland

Since it is 2021 after all, the most immediate AI flop is related to Tesla robotaxis, or rather the lack thereof. Back in April 2019, when he needed to raise money, Elon Musk promised that Tesla would achieve L5 autonomy by the end of 2020 [and reiterated this in April 2020]. The famous autonomy day pumped hype and showed limited demos available to some guests of the show. These demonstration rides were no different from the demo shown in 2016 (which, as it later turned out, was eventually recorded after many failed attempts). In fact Elon Musk claimed in 2016 that the self driving problem is essentially solved; here is a quote from this interview:

This was later followed by various promises of an autonomous coast-to-coast drive by the end of 2017, later pushed back and eventually canceled altogether. To be fair, Musk wasn't the only Silicon Valley … Read more...

AI - the no bullshit approach

Intro

Since many of my posts were mostly critical and arguably somewhat cynical [1], [2], [3], at least over the last 2-3 years, I decided to switch gears a little and let my audience know that I'm actually very constructive and busy building stuff most of the time, while my ranting on the blog is mostly a side project to vent, since above all I'm allergic to naive hype and nonsense. 

Nevertheless, I've worked in so-called AI/robotics/perception in industry for at least ten years now (and prior to that did a PhD and had a rather satisfying academic career). I've had a slightly different path than many working in the same area and hence have a slightly unusual point of view. For those who never bothered to read the bio: I was excited about connectionism way before it was cool, got slightly bored by it and got drawn into more bio-realistic/neuroscience-based models, worked on those for a few years and got disillusioned, then worked on robotics for a few years, got disillusioned by that, went on and got a DARPA grant to build the predictive vision model (which summarized all I learned about … Read more...

DeflAition

I started contemplating this post in mid-February 2020 while driving back from Phoenix to San Diego, a few miles after passing Yuma, staring into the sunset over the San Diego mountains on the horizon a hundred miles ahead. Since then the world has changed. And by the time you read this post a week from now (early April 2020) the world will have changed again. And by the summer of 2020 it will have changed several times over. I won't go too much into the COVID-19 situation here, since I'm not a biologist; my personal opinion though is that it is the real deal, a dangerous disease spreading like wildfire, something we have not really seen since the Spanish Flu of 1918. And since our supply chains are a lot more fragile, our lifestyles a lot more lavish and everybody is levered up, it has every chance of causing economic havoc unlike anything we've seen in the past two centuries. With that out of the way, let's move to AI, as the economic downturn will certainly have a huge impact there.

(not)OpenAI

Let's start with the article in Technology Review that came out in February, going deeper inside … Read more...

AI update, late 2019 - wizards of Oz

 

It's been 7 months since my last commentary on the field, and as it has become a regular appearance on this blog (and in fact many people apparently enjoy this form and keep asking for it), it is time for another one. For those new to the blog: here we generally strip the AI news coverage of fluff and try to get to the substance, often with a fair dose of sarcasm and cynicism. The more pompous and grandiose the PR statement, the more sarcasm and cynicism - just to provide some balance in nature. The field of AI never fails to deliver on pompous and grandiose fake news, hence I predict there will be material for this blog for many years to come. Now that the introductory stuff is behind us and you've been warned, let us go straight to what happened in the field since May 2019. 

Self driving cars

As time goes by, more and more cracks are showing in the self driving car narrative. In June, one of the prominent startups in the competition - Drive.ai - got acqui-hired by Apple, reportedly days before it would have run out of cash. For those not … Read more...