AI mid 2021. Self-driving car meets reality.

The pandemic has largely overwhelmed the news cycle over the past year, and in doing so has largely deflated the AI hype train. There were a few developments, though, which I'd consider significant: some of them well predicted by articles on this blog, and some surprising. Let's jump right in.

Million Robotaxis in wonderland

Since it is 2021 after all, the most immediate AI flop is related to Tesla robotaxis, or rather the lack thereof. Back in April 2019, when he needed to raise money, Elon Musk promised that Tesla would achieve L5 autonomy by the end of 2020 [a promise reiterated in April 2020]. The famous Autonomy Day pumped hype and offered limited demo rides to some guests of the show. These demonstration rides were no different from the demo shown in 2016 (which, as it later turned out, was eventually recorded after many failed attempts). In fact, Elon Musk claimed in 2016 that the self-driving problem was essentially solved; here is a quote from this interview:

This was later followed by various promises of an autonomous coast-to-coast drive by the end of 2017, later pushed back and eventually canceled altogether. To be fair, Musk wasn't the only Silicon Valley big fish blinded by this self-driving tunnel vision. Back in 2013, Marc Andreessen, a prominent Bay Area VC, said the following in a discussion with Peter Thiel:

Self driving cars are very close. Google basically has them working. Mercedes actually almost has them working. Mercedes their new top of the line sedan coming up this summer, they have almost, they had a very large amount of self driving technology in it, they yanked it the last minute cause they didn't think laws at the state level were ready yet to handle self driving cars.

Here we are in 2021 (8 years after those words were said) and self-driving cars are still a curiosity, limited to small geofenced deployments in areas with great infrastructure and great weather. The state of Tesla Full Self Driving was recently summarized pretty well by a few people with early access to the software in a video:

The drive is littered with numerous disengagements and dangerous situations in which the car didn't really know what to do, in otherwise pretty much perfect driving conditions: no adverse weather, good infrastructure. It should be clear to everybody that this is not even close to being ready for autonomy. Unfortunately, an increasing number of people have paid the ultimate price for believing in this FSD delusion [1][2][3][4]. Fortunately, it looks like this has finally come to the attention of traffic cops, as the first "tesla backseat driver" was recently arrested.

So how is this possible? How could the tycoons of Silicon Valley have been so certain this technology was within hand's reach? The tycoons were of course followed by an endless crowd of tech bros and the latest generation of deep learning kids in a self-reinforcing echo chamber. How could that crowd of seemingly super-intelligent geeks have been so wrong?

In reality it was always a giant delusion: a bluff fueled by an incorrect understanding of the developments in AI and a large underestimation of the difficulty of the problem. The crowd mostly still believes in it, but that belief is already under significant stress and cracking all over the place. Since so many deeply believed this story [there are apparently people who bought fleets of Teslas on margin, planning to run a small taxi company once autonomy became available] and such a river of money went into it, the collision with reality will be hard and painful for many. At least most of these tech bros will merely lose a bunch of money, not their lives like the unfortunate Autopilot victims.

To add a bit of spice to this entire debacle, it turns out that while Tesla deceptively advertises "full self driving" (which, BTW, in small print is actually "full self driving capability", which really isn't full self driving at all), it confesses to the California DMV that whatever they are selling and testing right now is a mere SAE Level 2 driver assist. In a recent series of official mail exchanges revealed through FOIA requests by PlainSite we read:

and more recently, in yet another revealed email exchange, Tesla engineer CJ Moore is quoted as saying:

All of this indicates that internally the company is completely aware of the inadequate state of its technology (that is the impression one gets watching talks by Andrej Karpathy too), and all this evidence puts Elon Musk in a rather awkward position: if these emails are accurate (and there is no reason to believe otherwise), then his capital raise as part of the 2019 Autonomy Day was Theranos-level fraudulent.

A note about LIDAR.

There is a vibrant discussion going on in the tech world as to whether LIDAR is necessary for a self-driving car, with proponents arguing it is a must and Elon Musk and the Tesla crowd claiming it's a crutch. I have a rather subtle opinion on this which rarely comes across correctly in short Twitter exchanges, so I'll try to make the point here.

Let's start with the fact that humans can indeed drive with only "camera-like" sensors: eyes. Eyes are in some ways worse than a modern camera, probably with lower overall resolution (beyond the fovea, human vision is actually really blurry), but in aspects such as dynamic range and handling of variable illumination eyes are still way superior to electronic cameras (not to mention being actively articulated to minimize obstructions). But the most important part of the eyes is the brain they are attached to (and are in fact a part of). We have an extraordinary ability to build coherent spatial models of the environment we are in, and to understand and predict various aspects of that environment. That capability is currently lacking in AI. We have rudimentary ways of taking camera images and inferring a 3d model of the world outside, but they are rather noisy and fragile. And here is where LIDAR comes in: it gives us the rough 3d structure of the scene without ambiguity. In that way LIDAR absolutely helps, and if Teslas had it onboard they'd be MUCH safer. But even with a LIDAR providing a rather good 3d model of the environment, the AI systems onboard these cars still don't understand much about it. Those systems still don't get who exactly the active actors in the scene are, which things may move and which are fixed, what the intentions of the actors are, what the dynamics behind various objects is, what the causal relations are, etc. Not to mention more exotic situations where, e.g., social context is necessary to understand what is going on.
So yes, LIDAR certainly helps to create a robust 3d scene representation (the Waymos of the world are right about that), and no, LIDAR is not the ultimate answer, and in fact not even a necessary ingredient (the Musks of the world are right about that too). That said, nobody in the AI world has the slightest clue how to build systems as robust in scene understanding as the human brain, and this blog is all about that problem and ideas on how to make it happen.
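To make the "3d without ambiguity" point concrete, here is a minimal sketch (entirely my own illustration; the focal length, baseline, and error figures are illustrative assumptions, not specs of any real sensor). A LIDAR return is a direct range measurement, so turning it into a 3d point is a closed-form transform, while a stereo camera must infer depth from disparity via z = f·B/d, which becomes extremely sensitive to pixel-level matching noise at long range:

```python
import math

def lidar_point(range_m, azimuth_rad, elevation_rad):
    """Convert a single LIDAR return (polar) to a Cartesian 3d point.
    Range is measured directly, so depth error equals the sensor's range error."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, object 30 m away.
f, B, true_depth = 700.0, 0.12, 30.0
true_disparity = f * B / true_depth  # only 2.8 px of disparity at 30 m

# A quarter-pixel matching error (optimistic for a real stereo matcher)
# swings the depth estimate by meters:
z_near = stereo_depth(true_disparity + 0.25, f, B)
z_far = stereo_depth(true_disparity - 0.25, f, B)
print(f"stereo depth range: {z_near:.1f} .. {z_far:.1f} m")

# A LIDAR with ~2 cm range accuracy pins the same point down to centimeters:
print(f"lidar depth range:  {true_depth - 0.02:.2f} .. {true_depth + 0.02:.2f} m")
```

The asymmetry is the point: the geometry problem the camera has to solve by noisy inference, the LIDAR solves by construction. What neither solves is the scene-understanding problem described above.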

Krafcik self driving away

While we are on the topic, Waymo has recently also faced some rough developments: CEO John Krafcik suddenly left the company, leaving not one but two executives sharing the role of CEO. Clearly two CEOs is not a dream executive structure, and the inability to establish a single leader to inherit the company indicates a rather serious problem. Krafcik sent a letter to Waymo employees explaining his decision, and somebody on Twitter or Reddit took the liberty of translating his corporate newspeak into English:

Although fairly cynical, I think this "translation" captures the color of the situation rather well. Waymo, although technically operating a small fleet of self-driving cars, is in no position to scale these deployments beyond a few "joy-ride" geofenced suburbs. These cars are surely safer than Teslas running beta FSD, but they still require way too much attention, way too precise mapping data, good weather, perfect and clean sensors, and tons of maintenance. In other words, this tech is extremely fragile and hence not ready for prime time. Update: literally on the day this was posted, the following video of a Waymo "incident" was published. If there was any doubt about how fragile Waymo's technology is, please watch this video carefully.

Uber and Lyft drivers are here to stay...

In my previous post I went over the Uber deal with Aurora, in which Uber essentially paid Aurora to take over what was left of its autonomous car unit. It turns out that Lyft recently decided to follow suit and dumped their self-driving unit to Toyota. I'm pretty sure Toyota will extract whatever bits of technology are available in this unit to increase the passive safety of their cars, much as they were doing before in the spirit of the "guardian angel" proposed back in 2016 by Gill Pratt. In either case, now that it is established beyond any uncertainty that Tesla FSD is a pipe dream, Uber and Lyft drivers are here to stay for the foreseeable future.

Voyage went on a cruise. 

Voyage was acquired by Cruise to great fanfare, in what would seem like a successful startup exit. The details of the deal were not revealed, and neither was the valuation. However, I've heard rumors that the acquisition price in this case was somewhat below the total amount raised from investors; consequently, common shareholder equity in such a deal would be wiped out, i.e. the transaction is better described as a "liquidation" than an "exit". This is still not confirmed, and I'm waiting for some official filing from Cruise to get a final confirmation. If these rumors were to be confirmed, it would be another recent high-profile liquidation, after Element AI.

Self deflating balloon

All in all, the autonomous vehicle bubble is losing air at a progressively faster pace [1]. I wouldn't call it a total bust yet, but that is coming down the road too. I originally predicted [1],[2],[3] that the abrupt end of self-driving car dreams would be the nail in the coffin of the current wave of AI hype built on deep learning. It has certainly taken longer than I anticipated, but I'm still pretty convinced that is how it will play out. The deep learning hype is now deflating pretty strongly too, with even the biggest "enthusiasts" now admitting that things ain't as rosy as they were hoping.

Andrew Ng finally X-rayed the AI hype.

My regular readers will undoubtedly recall that I've often picked up on gems from Andrew Ng, as he is one of the prime AI hype blowers of all time. Recall that in late 2017 Ng was pretty much calling radiology an extinct profession, blowing his trumpet about an AI system that could seemingly detect pneumonia in X-rays better than humans.

Fast forward to 2021, and in a stunning recent revelation Ng admitted this:

What a 180-degree turn... Apparently the skeptics pointing out various flaws in AI approaches to clinical data may have been right after all. The one thing left for me here is to quote a classic XKCD:

And while it turns out AI isn't ready to replace radiologists, pigeons just might do it.

Deep learning turns out shallow again

There are somewhat fewer flashy headlines about deep learning these days, most notably because everything is overwhelmed by the pandemic, but every once in a while an interesting nugget pops out. In this paper, for example, we learn that deep nets are essentially unable to learn same-different relations beyond cases where the examples resemble the training set statistics at a low pixel level. This isn't very surprising: ever since we learned about adversarial examples, it should have been clear that neocognitron-type architectures rely on a completely different set of features than humans do to make their classifications. There are numerous other visual tasks that deep learning can't really handle at all, which I've listed in various posts before [1][2], but this one seems rather fundamental.
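To see why this particular task is so awkward for pattern statistics, here is a toy sketch (entirely my own illustration, not the paper's experimental setup): the same-different label is trivial to compute symbolically, yet it carries essentially no signal in low-level pixel statistics, which is exactly the kind of cue a neocognitron-style feature hierarchy leans on.

```python
import random

random.seed(0)

def make_pair(same):
    """Generate a pair of 5x5 binary patterns; the patterns are identical iff `same`."""
    a = [[random.randint(0, 1) for _ in range(5)] for _ in range(5)]
    if same:
        b = [row[:] for row in a]
    else:
        b = [[random.randint(0, 1) for _ in range(5)] for _ in range(5)]
    return a, b

def is_same(a, b):
    # The relation itself is one line of symbolic comparison...
    return a == b

def mean_intensity(a, b):
    # ...but a low-level summary statistic of the stimulus carries no trace of it.
    flat = [p for row in a + b for p in row]
    return sum(flat) / len(flat)

same_means = [mean_intensity(*make_pair(True)) for _ in range(2000)]
diff_means = [mean_intensity(*make_pair(False)) for _ in range(2000)]
avg_same = sum(same_means) / len(same_means)
avg_diff = sum(diff_means) / len(diff_means)
print(f"avg intensity, same pairs: {avg_same:.3f}, different pairs: {avg_diff:.3f}")
```

Both averages land around 0.5: "same" and "different" stimuli are statistically indistinguishable at the pixel level, so solving the task requires comparing the two patterns as objects, i.e. relational reasoning rather than feature detection.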

The real change is coming in quietly.  

Not everything about AI sucks, and I think we are about to see some serious changes in the coming years. The key is not to blindly believe that deep learning will solve everything given just more data and compute, but to soberly estimate what this tech can do and what kinds of real, practical problems can be solved with those capabilities. We are not going to get rid of the driver profession anytime soon, but I think we have a high chance of getting rid of the cashier profession [and perhaps a number of other related professions, such as warehouse clerk]. This isn't solely the result of AI, but also of the amazing progress of semiconductor technology and the ability to create amazing high-resolution cameras at ridiculously cheap prices. We are working on this at AccelRobotics, and we have news incoming about various pilot deployments (we've been a bit delayed by the pandemic, but a few projects are finally coming to fruition). As I mentioned previously, this application is not ridden with the liability problems of driving a car and has a completely different risk/reward profile, hence this stuff is real. Here is the first of our incoming announcements:

Hopefully soon I'll be able to share more details in this blog about our technology and neat solutions. 

