AI update, late 2019 - wizards of Oz

 

It's been 7 months since my last commentary on the field, and since these updates have become a regular feature of this blog (many people apparently enjoy the form and keep asking for it), it is time for another one. For those new to the blog: here we generally strip the AI news coverage of its fluff and try to get to the substance, often with a fair dose of sarcasm and cynicism. The more pompous and grandiose the PR statement, the more sarcasm and cynicism - just to provide some balance in nature. The field of AI never fails to deliver on pompous and grandiose fake news, hence I predict there will be material for this blog for many years to come. Now that the introductory stuff is behind us and you've been warned, let us go straight to what happened in the field since May 2019.

Self driving cars

As time goes on, more and more cracks are showing in the self driving car narrative. In June, one of the prominent startups in the competition - Drive.ai - got acqui-hired by Apple, reportedly days before it would have run out of cash. For those not well versed in startup valuation, this is not the best imaginable outcome. Startup employees typically get stock options (an option is basically a contract that allows one to buy a given number of shares at a fixed price), often trading off the better salary they'd earn at an established company. These options are typically for common stock - in some ways the least secure part of the equity structure. Investors buy shares of a company in investment rounds and often get, at least in part, what is called preferred stock along with other forms of liquidation preference. This means that in the event of liquidation, those shares need to get paid off first, before any of the outstanding common shares. In any case the best outcome for a startup is an IPO (Initial Public Offering), when the shares get registered and can be traded freely on an open market, or an acquisition at the highest possible valuation. At high valuations the preferred stock effectively becomes equivalent to common stock, and common stock holders and option holders can cash out. However, when the valuation at acquisition is low, there may not be enough to cover the preferred stock (or any outstanding debt or convertible notes), in which case the common stock holders end up with nothing (and option holders can even end up negative if they exercised their options). Anyway, long story short, this seems to have been the case at Drive.ai, since they'd been showing signs of financial distress earlier this year. As a curiosity, one of Drive.ai's cofounders is Andrew Ng's wife; apparently having an AI-prodigy spouse was of little help to the problem.
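To make the preference mechanics concrete, here is a toy liquidation waterfall in Python. The numbers and the single 1x non-participating preference are entirely hypothetical - real cap tables and term sheets are much messier - but the mechanism is the same:

```python
def waterfall(sale_price, preference, preferred_frac):
    """Toy liquidation waterfall: 1x non-participating preferred vs. common.

    Preferred holders take the better of (a) their fixed preference amount
    or (b) converting to common and sharing pro-rata; common gets the rest.
    All amounts in $M; numbers below are purely illustrative.
    """
    convert_value = sale_price * preferred_frac
    if convert_value >= preference:
        # High valuation: preferred convert, everyone shares pro-rata.
        return {"preferred": convert_value, "common": sale_price - convert_value}
    # Low valuation: the preference eats the proceeds first.
    payout = min(sale_price, preference)
    return {"preferred": payout, "common": sale_price - payout}

# Hypothetical: investors put in $77M (1x preference) for 30% of the company.
print(waterfall(1000, 77, 0.30))  # big exit: common holders do fine
print(waterfall(80, 77, 0.30))    # fire sale: common gets scraps
print(waterfall(50, 77, 0.30))    # distressed acqui-hire: common gets nothing
```

The last line is the Drive.ai-style scenario: the sale price does not even cover the preference, so everything goes to the preferred holders and the options are worthless.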

Voyage, another similar startup, now wants to solve the self driving problem with Deep Reinforcement Learning. Why? Because this is the latest buzzword in AI circles. Does it make any sense? None at all, since it can only work in simulation, and as we all know, in theory there is no difference between simulation and reality, but in reality there is. Anyway, autonomy is all about the corner cases, and the nasty thing about corner cases is exactly that they were not anticipated by anybody, and consequently cannot be simulated.

Other self driving car players have been getting somewhat mixed press as well. Cruise is apparently plagued with glitches, one of Alphabet's execs admitted that there has been much hype in the space, while Waymo's valuation got pretty seriously (40% !!!) slashed by Morgan Stanley. Not surprisingly Waymo, which is typically rather quiet in the media, rolled out a big PR offensive, started showing off cars without drivers (remotely supervised) driving around Phoenix suburbs, and invited automotive journalists for a ride. I view this as an attempt to regain control over the crumbling narrative, which may be effective for a while. For those who have never visited Phoenix, the suburbs where Waymo tests their cars near Chandler, AZ are pretty much the ideal case for an AV - wide streets, few pedestrians (especially in the summer, when it is really hot). I visited that area this summer and saw a bunch of these Waymo cars myself, pointlessly cruising around like lost sheep without a purpose.

We've also learned a few new facts about the infamous Uber incident: apparently Uber AVs were involved in some 37 crashes before the fatal accident in Arizona last year, while “The system design did not include a consideration for jaywalking pedestrians,” as we learned from a stunning NTSB report. This may explain why Uber is now looking to pay for Waymo tech, while their ex AV boss-star responsible for the Uber-Waymo fiasco, Anthony Levandowski, got charged with trade secret theft and is facing many years in prison. I'm old enough to remember when Anthony's startup - Otto - was delivering beer in Colorado to great fanfare, something I've mentioned in this blog before, back in 2016.

Meanwhile Daimler joined the crowd of companies slowly deflating the self driving balloon, to the point of even admitting they'd be cutting spending on it.

Tesla keeps the story up with promises of a million self driving cars by 2020 - they raised money in May based on that promise, so they need to keep it alive. While the stuff that Tesla does is in many ways impressive - a recent talk by Andrej Karpathy, head of AI there, revealed some of the details - the grim reality is that as of today even unusual illumination can throw the system off. To get an idea of how far away Tesla is from autonomy, it's enough to search YouTube for the hundreds of failures, particularly with the recently released enhanced summon. I keep my prediction that there will be exactly zero fully autonomous Teslas in 2020, and most likely in 2021, 2022 and at least 2023. I would not expect anything really usable by 2025, and that only if somebody finally makes a scientific breakthrough. If that fundamental shift does not happen, chances are self driving cars will remain a pipe dream for a few decades or more (aside maybe from some heavily geofenced, low speed local services, such as on a university campus).

In general the sentiment regarding autonomous vehicles seems to be changing, with more prominent news outlets pouring buckets of cold water on the technological hot shots, e.g. [1], [2], [3].

Finally, since we are on technological hot shots, I had the pleasure to meet George Hotz, the founder of Comma.ai and a known hacker. Earlier this year Comma moved to San Diego, since in George's own words "San Francisco is a scam". He attended the same event at UCSD where I was, and gave a somewhat entertaining and amusing talk in which he called self driving "a scam". Anyway, Comma is "proudly delivering level 2 autonomy", which is pretty much in line with the functionality of Tesla's autopilot, only their system actually has driver monitoring, which I think is a huge plus. That said, I would not let their hackish software ever talk to the CAN bus in my car, and I certainly do not recommend anyone do so - CAN essentially allows control of every aspect of the vehicle, and any code responsible for controlling the vehicle should adhere to strict safety standards such as ISO 26262. On the other hand, I have a suspicion Tesla does not adhere to these standards either...

OpenAI

OpenAI (which is actually pretty ClosedAI these days) never fails to deliver pompous and grandiose PR (often crossing the boundary into straight-out bullshit), and so it was in the past few months. Firstly, OpenAI did manage to strike some sort of deal with Microsoft, in which Microsoft promised to invest $1B over the next few years. The terms of that investment were not disclosed (in the spirit of openness), and rumors suggest that a huge part of the money will be funneled straight back to Microsoft in the form of Azure fees. Nevertheless, this will certainly allow them to keep rolling for a while and deliver ever more obnoxious promises about pre-AGI technologies, something that will keep us entertained.

As I mentioned in my previous half-year update, OpenAI came up with a transformer based language model called GPT-2 and refused to release the full version, fearing the horrible consequences it might have for the future of humanity. Well, it did not take long before some dude - Aaron Gokaslan - managed to replicate the full model and released it in the name of science. Obviously the world did not implode, as the model is just yet another gibberish manufacturing machine that understands nothing of what it generates. Gary Marcus, in his crusade against AI hubris, came down on GPT-2 to show just how pathetic it is. Nevertheless, all those events eventually forced OpenAI to release their own original model, and much to nobody's surprise, the world did not implode on that day either.

Meanwhile they also released their little robotics project - solving a Rubik's cube using deep reinforcement learning in a robotic hand... Only the actual solving is done via symbolic methods, estimation of the state of the cube is done via Bluetooth instrumentation, and even then the robot fails most of the time. Again Gary Marcus took their PR statement apart, since they decided to - to put it mildly - not emphasize some of these embarrassing details. I don't want to go into much detail on this train-wreck; I think Gary Marcus did a great job exposing it and I encourage everyone to follow him on Twitter and read what he had to say [as well as his recent book]. All I want to say is that if, after the equivalent of 12 thousand years of training, with all that instrumentation and a symbolic solver on top, with a state of the art, superbly precise robotic hand (which probably itself is north of $1M), all we can get is 20% performance on a Rubik's cube, I think it is a great argument to stop for a moment, take a deep breath and ask what in the hell we are trying to accomplish and how exactly we got here in the first place.

Other assorted news

In other news, Lex Fridman, the author of the highly controversial study involving alertness with Tesla autopilot [which I mentioned in more detail in my previous update], seems to have disappeared from the MIT website. It is not clear whether he was fired or suspended, or whether he resigned himself, but his departure from MIT seems to be partially confirmed by his erratic tweets. I think this dude is pretty good at interviewing people about AI, and perhaps that is what he should focus on.

Element AI, one of these Canada-based AI wannabe-unicorns with an undefined product or service, raised a flat round and fired their CEO. Again, for those not familiar with how startups work, a flat round (or, even worse, a down round) is an indication of weakness. Essentially it indicates that over the past several years the company went nowhere in figuring out its business and has to substantially dilute the shareholders (including employees) to stay in business.
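The dilution arithmetic is simple enough to put in a few lines of Python (all numbers hypothetical, just to show the mechanism):

```python
# A flat round: new money comes in at the SAME pre-money valuation as the
# previous round, so existing holders give up ownership with nothing to
# show for the intervening years. Numbers are purely illustrative.
pre_money = 200.0   # $M, unchanged since the last round
new_money = 50.0    # $M raised in the flat round
post_money = pre_money + new_money

dilution = new_money / post_money        # fraction of the company handed over
remaining = 1.0 - dilution               # what existing holders keep

print(f"dilution: {dilution:.0%}")       # every existing holder owns 20% less
print(f"remaining stake: {remaining:.0%}")
```

In an up round the same dilution would at least be offset by a higher valuation of the remaining stake; in a flat round it is pure loss for the employees holding options.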

Elon Musk and Jack Ma apparently had some debate about AI in China. I have seen some cringeworthy clips of it and could not find the strength to torture myself with the whole thing. I bet some of my readers will be disappointed by this, but please understand, I'm only human.

Boston Dynamics released another video of their robots. Their videos are in some ways introducing a new genre of art. I'm not yet sure exactly how it should be defined, but generally the clips show a robot that is absolutely impractical doing some gymnastics, with humans performing strange motions around it. They also announced they will be selling their Spot dog robot. I'm sure there will be university labs and Hollywood studios that will buy several units each just for show, but if you are looking for a robotic dog companion, you will be better off with the new AIBO (plus it will be a lot cheaper, even after the price bump). While we are on Boston Dynamics and their novel contribution to the world of motion picture art, another group of artists, Corridor Digital, released their own parody, "Bosstown Dynamics". The CGI-generated clip looks much like one of those from Boston Dynamics and is generally hilarious. Sadly, there is no shortage of clueless people who think it is actually real - just yet another proof of how naive people are when it comes to all this AI nonsense.

DeepMind lost almost half a billion dollars last year [1] playing StarCraft II, which I guess puts the billion that Microsoft is about to burn in OpenAI's furnace over the next several years in a somewhat better light. I'm not sure if it is the sunk cost fallacy at this point, or whether there is still a play here to make some PR, boost the mother company's stock price, etc. Anyway, other investors in "AI unicorns" can study these examples as an approximation of their future "gains".

I have also seen some indications of a bubble popping in the AI radiology space. As we all know, we no longer need radiologists - only it turns out we actually do. I don't follow this space that closely, but Max Little on Twitter may have more details on that.

John Carmack is going to take a shot at AI. Whatever he accomplishes in that field I hope it will be equally as entertaining as Quake and equally as smart as the fast inverse square root algorithm. 
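For younger readers who missed it: the fast inverse square root is the famous bit-twiddling approximation of 1/√x from the Quake III Arena source code. A Python port (the magic constant is the one from the original C; the Python is just for illustration) shows why people still admire it:

```python
import struct


def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the Quake III bit hack plus one Newton step."""
    # Reinterpret the float's bits as a 32-bit integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The famous magic constant: a cheap initial guess via exponent manipulation.
    i = 0x5F3759DF - (i >> 1)
    # Reinterpret the bits back as a float.
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson iteration refines the guess to ~0.2% accuracy.
    return y * (1.5 - 0.5 * x * y * y)


print(fast_inv_sqrt(4.0))    # close to 0.5
print(fast_inv_sqrt(0.25))   # close to 2.0
```

No division, no square root, no lookup table - just integer arithmetic on the bit pattern, at a time when that was dramatically cheaper than the hardware alternative.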

Scaling and diminishing returns

OpenAI, among other things, publishes some estimates of how much compute is being deployed for AI. Their picture is quite interesting:

Computing power available for AI, source OpenAI

Now what this diagram does not show is the amount of money that went into AI in the corresponding time periods. Since between 1980 and 2010 the growth of the models closely followed Moore's law, it would indicate that expenses on compute were roughly constant over that period. In 2010 the expenses exploded, hence the accelerated scaling. But this plot is particularly interesting when contrasted with e.g. this:

or this:

Even though we have been doubling the compute available to AI every 3.5 months - a factor of roughly 10x per year, so some 10,000x since, say, 2015 - the classification performance as measured by top-1 accuracy on ImageNet has barely moved. Now arguably top-1 accuracy on ImageNet is probably not the best measure, but still, that seems somewhat striking. The diminishing returns are visible elsewhere too; even that OpenAI Rubik's cube is somewhat indicative.
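The arithmetic behind that 10,000x, for anyone who wants to check it:

```python
# Doubling every 3.5 months compounds to roughly an order of magnitude per year.
per_year = 2 ** (12 / 3.5)          # ~10.8x growth per year
total_2015_2019 = per_year ** 4     # four years of that: ~13,000x

print(f"per year: {per_year:.1f}x")
print(f"2015-2019: {total_2015_2019:,.0f}x")
```

So "some 10,000x" is, if anything, on the conservative side of the compounding.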

Scaling is a frequent argument in the Bay Area justifying any level of obscene waste, but there is bad news brewing over the Valley. Although Moore's law seems to be ongoing, Koomey's law is slowing down noticeably. Hence we can pack in more transistors, but the power bill for using them has just begun going up. And if that is so, any deep learning contraption which today requires millions of dollars in electricity to train will likely require the same order of expense in the future. So the AI hyper-growth party will soon be busted by the utility company. And in the Bay Area this may happen sooner than one might expect.
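A rough sketch of the collision course: assume compute demand keeps growing 10x a year while energy efficiency (Koomey's law) now doubles only every ~2.6 years - a commonly cited post-2000 figure, so treat both numbers as assumptions, not measurements:

```python
# AI compute demand: doubling every 3.5 months (the OpenAI figure).
compute_growth = 2 ** (12 / 3.5)      # ~10.8x more compute per year

# Koomey's law, slowed down: computations-per-joule doubling every ~2.6 years.
efficiency_growth = 2 ** (1 / 2.6)    # ~1.3x more ops per joule per year

# Efficiency gains offset only a sliver of the demand growth.
power_bill_growth = compute_growth / efficiency_growth
print(f"energy use grows ~{power_bill_growth:.1f}x per year")
```

Under these assumptions the electricity consumed by frontier training runs multiplies by roughly 8x every year - which is exactly why the utility company, not the GPU vendor, ends the party.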

Summary

The whole field of AI resembles a giant collective of wizards of Oz. A lot of effort is put into convincing the gullible public that AI is magic, where in fact it is really just a bunch of smoke and mirrors. The wizards use certain magical language, carefully avoiding saying anything that would indicate their stuff is not magic. I bet many of these wizards, in their narcissistic psyche, do indeed believe wholeheartedly that they have magical powers...

In practice, even though there is no magic, there is a lot of useful stuff one can do with that smoke and mirrors, not just deception and ripping off naive investors. I'm currently working on something that certainly uses what would be called AI, involves lots of visual perception and is in some ways autonomous, but unlike some of these other moonshots seems quite doable (doable does not mean easy!) with today's technology, and moreover seems to provide huge economic value. More on that soon, once Accel Robotics gets out of stealth mode and we publicly announce what we are up to. Stay tuned!

 

 
