I started contemplating this post in mid-February 2020 while driving back from Phoenix to San Diego, a few miles after passing Yuma, staring into the sunset over the San Diego mountains on the horizon a hundred miles ahead. Since then the world has changed. And by the time you read this post a week from now (early April 2020) the world will have changed again. And by the summer of 2020 it will have changed several times over. I won't go too much into the COVID-19 situation here, since I'm not a biologist, but my personal opinion is that it is the real deal: a dangerous disease spreading like wildfire, something we have not really seen since the Spanish Flu of 1918. And since our supply chains are a lot more fragile, our lifestyles a lot more lavish, and everybody is levered up, it has every chance of causing economic havoc unlike anything we've seen in the past two centuries. With that out of the way, let's move to AI, since the economic downturn will certainly have a huge impact there.
(not)OpenAI
Let's start with the article in MIT Technology Review that came out in February, taking a deeper look inside OpenAI, which by now is an entirely closed AI lab effectively owned by Microsoft. This article was what prompted me to start this post. There are several fairly apparent conclusions that can be drawn from the story:
- These guys have no idea how to build AGI or even what AGI might look like. That is stated almost explicitly in many parts of the article; what they are actually doing is attempting to expand and scale current techniques (mostly deep learning) to see how far they can go by applying progressively more ridiculous computing resources to somewhat arbitrary tasks. It is a bit like calling a fireworks shop a "moon rocket laboratory" just because it tries to build the biggest firework possible.
- Since they have no idea what they are doing, and there is really no vision of what they want to accomplish - hence the leadership is inherently weak and insecure - the blank is filled with a semi-religious, never questioned "charter", recited at lunch by the priests of the congregation. Full loyalty to the charter is expected, to the point of even varying compensation by the level of "faith".
- The entire organization appears intellectually weak [to be fair, I'm not saying everyone in there is weak - there are probably a few brilliant people - but the leadership is weak and that inevitably drags the entire organization down]. The lack of any appreciable understanding of, or vision for, what AI might be and how one could possibly get there is replaced with posturing and virtue signaling. Notably, the rank and file are not allowed to express themselves without censorship from the ruling committee, of whom only Ilya Sutskever has any actual experience in the field, while guys like Greg Brockman or Sam Altman are semi-successful snake-oil salesmen.
Clearly this environment is as conducive to free thinking as a medieval monastery in the darkest of ages. The article also illustrates how their idealistic charter is slowly colliding with economic reality. In fact I believe the coronavirus and the resulting economic instability may accelerate that collision very substantially.
This article confirms everything I've ever suspected about this organization, pretty much summarized in the points above. It is an egregious money grab disguised as a "save the world" fairy tale and legitimized by frequent media stunts, which under closer scrutiny often turn out not to be what was initially advertised. In simple terms let's call it what it really is - a fraud.
Signs of disillusionment in the valley
Back in February, before Silicon Valley pretty much completely shut down for business, one of the most prominent VCs - Andreessen Horowitz - published a seemingly boring post on whether AI companies should be viewed more like software startups or rather like service companies. Blogger Scott Locklin took the A16Z post apart and did an excellent job of stating out loud some of the things written between the lines in the original article.
Some of my favorite quotes from the article are:
[from A16Z post:] Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.
[Comment by Scott]: This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. Really, they’re talking about cheap overseas labor when lizard man fixers like Yang regurgitate the “AI coming for your jobs” meme; AI actually stands for “Alien (or) Immigrant” in this context. Yes they do hold out the possibility of ML being used in some limited domains; I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work doesn’t really mix well with those limited domains, which have a limited market.
I could not really say it better myself. I fully concur; my own personal experience is very similar and I would agree with most of the quotes from the commentary. At AccelRobotics we realize all of that: the "AI" part of our solution is maybe 10%-15% of all the technical ingenuity that goes into getting an autonomous store to work, and often it is not "deep learning pixie dust" but much simpler and more reliable methods applied to stricter, better-defined domains [that said, DL models have their place too]. It is often better to invest resources in getting slightly better data or adding one more sensor than to train some ridiculously huge deep learning model and expect miracles. In other words, you can never build a product if all you focus on is some nebulous AI, and once you focus on the product, AI becomes just one of many technical tools to make it work.
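To make the "simpler methods on a well-defined domain" point a bit more concrete, here is a purely hypothetical sketch (not our actual pipeline - the shelf, weights and tolerances are made up for illustration): for a narrow question such as "how many units left this shelf?", a calibrated weight sensor and a bit of arithmetic are often more reliable than a giant vision model, with anything ambiguous handed off to other sensors or human review.

```python
# Hypothetical illustration: infer how many units were taken from a shelf
# using only a load-cell reading, no deep learning involved.
from dataclasses import dataclass


@dataclass
class ShelfReading:
    shelf_id: str
    grams_before: float  # shelf weight before the interaction
    grams_after: float   # shelf weight after the interaction


def units_removed(reading: ShelfReading, unit_weight_g: float, tol_g: float = 5.0) -> int:
    """Estimate how many units were taken, from the weight delta alone.

    Returns -1 when the residual exceeds the sensor tolerance, signalling
    that the event should be resolved by other means (vision, human review).
    """
    delta = reading.grams_before - reading.grams_after
    count = round(delta / unit_weight_g)
    if abs(delta - count * unit_weight_g) > tol_g:
        return -1  # ambiguous reading, fall back to other sensors
    return max(count, 0)


# Two 155 g items taken: a delta of 310 g resolves cleanly to 2 units.
print(units_removed(ShelfReading("A3", 1520.0, 1210.0), unit_weight_g=155.0))
```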
In the end, Scott concludes:
This isn’t exactly an announcement of a new “AI winter,” but it’s autumn and the winter is coming for startups who claim to be offering world beating “AI” solutions. The promise of “AI” has always been to replace human labor and increase human power over nature. People who actually think ML is “AI” think the machine will just teach itself somehow; no humans needed. Yet, that’s not the financial or physical reality. (...)
Given this was written in February, when the impact of the coronavirus was not yet fully appreciated (and likely even at the time of writing of this post it is still not fully appreciated), there is a substantial probability of a general "winter", not just an AI one. The entire post is a very good and quick read and I think most of my readers will enjoy it too.
Atrium's raise and fall
While we are on Andreessen Horowitz: Atrium raised $65 million from them in September 2018 to great fanfare. Much like many other of these miracle AI startups, Atrium promised to "disrupt" legal services and replace lawyers with AI - never really explaining how, or what that might look like. But the founders were connected enough (Justin Kan, the CEO, was known for selling Twitch to Amazon for roughly $1B) and went through Y Combinator - a central arena of the Bay Area echo chamber, run by prominent clowns such as Sam Altman (currently proudly leading OpenAI). Fast forward to 2020 and ... they are shutting down. I guess lawyers, along with truck drivers, will stay in business for a while.
NTSB report on Tesla autopilot crash
The NTSB (National Transportation Safety Board) released a report on another Tesla autopilot crash [full hearing available here], the one in which a 38-year-old Apple engineer, Walter Huang, burned to death after his Model X crashed into a center divider [actually, as one of my friends pointed out, he was pulled out of the car before it was engulfed in flames and died of his injuries]. The conclusion of the investigation found what everyone had suspected from the beginning - the crash was caused by an autopilot error while the driver was distracted, playing on his phone. The investigation also noted that the highway attenuator was damaged and had not been fixed on time (had it been in proper condition, the crash would likely have been less severe). The whole report is pretty damning for Tesla, both for not providing sufficient means of detecting whether the driver is attending to the road and for misleading marketing suggesting that "autopilot" is indeed an autopilot. NHTSA got some blame for not following up on NTSB recommendations after previous Tesla crashes, and the entire hearing was closed with a remark from NTSB chairman Robert L. Sumwalt:
"It's time to stop enabling drivers in any partially automated vehicle to pretend that they have driverless cars. Because they don't have driverless cars." - Chairman of NTSB
Of course what they should have done was to take the autopilot off the road until satisfactory mechanisms are in place. Instead they watered down their report by stating that companies such as Apple should limit the ways in which drivers can use cell phones in cars while driving. This is somewhat ridiculous, since it is nearly impossible for a cellphone to detect whether it is being used by the driver or a passenger, and it leaves an aftertaste of implying that Apple is to blame for the accident just as much as Tesla, which is complete nonsense. Tesla is the company that supplied a system allowing the driver to be distracted and to act as if he had an autonomous car. Tesla supplied the misleading marketing, and Tesla did not provide an adequate driver monitoring system. Whatever else the driver was doing is irrelevant. If he had been shaving when the crash happened, no one in their right mind would suggest blaming Gillette for the crash.
None of this stops Elon Musk from reiterating the promise of robotaxis in 2020 (which, as I've expressed earlier [1], [2], has the same chance of happening as the autonomous coast-to-coast drive promised for 2017 or the Moon flyover promised for 2018):
All that while the most recent Tesla software still mistakes truck tail lights for stop lights (this reminds me of my old post here), and while Tesla reported 12 - yes, you read this right - twelve (!) autonomous miles in 2019 in California. The response to the tweet calling it "Full Self Delusion" is very accurate here. Besides, as has been noted a million times already, there is currently no regulatory approval process for deploying (not testing - that is regulated, I know it is counterintuitive) self-driving cars in the US, and nobody in the field knows what Musk is referring to when he mentions regulatory approval.
And speaking of regulators: while NHTSA keeps sleeping at the wheel with respect to Tesla as their vehicles keep rear-ending fire trucks, they had no problem suspending an experimental autonomous shuttle service after one of the passengers fell from a seat... Talk about double standards...
Starsky crashing down to earth
Earlier this year rumors surfaced indicating that Starsky Robotics was in distress and laying off most of its staff. Soon thereafter the company confirmed it was shutting down, and did so with a hell of a splash. Its CEO Stefan Seltz-Axmacher released a Medium post which is a gold mine of first-hand observations about the industry and the technical capabilities of the AI pixie dust. With honesty and integrity rarely found in Silicon Valley, he went out and said what many had been whispering for a while - AI is not really "AI". Some of my favorite quotes from that post (though I encourage my readers who haven't yet seen it to read it in full):
There are too many problems with the AV industry to detail here: the professorial pace at which most teams work, the lack of tangible deployment milestones, the open secret that there isn’t a robotaxi business model, etc. The biggest, however, is that supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool.
After the post thundered through the AI community, Stefan got invited to the Autonocast, where he expanded on and explained in more detail the story behind Starsky; that podcast is worth a listen as well. In essence he notes that really no one has an "artificial brain" that could drive a car in all conditions, and there will have to be a human in the loop for a long time. Moreover, the entire approach of training supervised models appears to be approaching an asymptote way too early to be deployable - something I've been writing about on this blog for years.
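To make the "asymptote" point concrete, below is a toy sketch with entirely made-up numbers (not Starsky's data): fitting a saturating power-law learning curve of the form error(n) = e_inf + a*n^(-b) shows that once the curve flattens against its irreducible floor e_inf, even an order of magnitude more labeled data barely moves the needle.

```python
# Toy illustration of a saturating learning curve (hypothetical numbers).
import numpy as np
from scipy.optimize import curve_fit


def learning_curve(n, e_inf, a, b):
    # error as a function of training data volume n: power-law decay
    # towards an irreducible floor e_inf
    return e_inf + a * n ** (-b)


# made-up measurements: training miles (millions) vs. error/disengagement rate
n_data = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
errors = np.array([0.30, 0.14, 0.11, 0.07, 0.062, 0.055])

(e_inf, a, b), _ = curve_fit(learning_curve, n_data, errors, p0=(0.05, 0.1, 0.5))
print(f"estimated irreducible error floor: {e_inf:.3f}")
print(f"predicted error with 10x more data: {learning_curve(500.0, e_inf, a, b):.3f}")
# If e_inf sits far above the level required for driverless deployment,
# no realistic amount of extra supervised data closes the gap.
```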
And while we are on stars: the fallen star of the autonomous car industry, Anthony Levandowski, filed for bankruptcy and will very likely end up in jail for stealing intellectual property from Waymo. And speaking of Waymo...
Waymo's self-deflating valuation
Last year Waymo enjoyed a ridiculous valuation of $175 billion, which last fall got slashed to $105 billion by Morgan Stanley. Last month they raised their first outside round, $2.25 billion, at a valuation of roughly $30 billion. To put this into perspective I took the liberty of making the following plot:
If the trend were to continue, they would be worth zero at some point in mid-2020, 2021 at the latest. Which, given the coronavirus havoc, might not be that far from reality. Others have also noted that raising an outside round at this point indicates they are far from any ability to make money off of this endeavor.
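For the curious, the extrapolation above is nothing fancy - a minimal sketch of the same back-of-the-envelope calculation is below (valuations as publicly reported, dates approximated as fractional years):

```python
# Naive linear extrapolation of Waymo's reported valuations.
import numpy as np

years = np.array([2018.7, 2019.7, 2020.2])       # approximate report dates
valuation_bn = np.array([175.0, 105.0, 30.0])    # $175B -> $105B -> $30B

slope, intercept = np.polyfit(years, valuation_bn, 1)
zero_crossing = -intercept / slope
print(f"trend slope: {slope:.0f} $B/year")
print(f"linear trend hits $0B around {zero_crossing:.1f}")  # roughly mid/late 2020
```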
$30B is still an astronomical valuation for a company which cannot even supply enough self-driving rides on a sunny day in Phoenix for a three-hour fest with a few hundred people (this is my first-hand experience), but given the rate of deflation, their valuation will soon reflect the actual enterprise value of their business.
Others in that space are struggling as well, with Zoox laying off all of its test drivers. The word on the street is that Zoox has been out looking for money for over a year now and, in the new economic reality, might converge to zero value even faster than Waymo.
The only successful raise aside from Waymo's (and an actual up-round) was Pony.ai, which raised $462 million (mostly from Toyota) in February at a $3B valuation. I would not be surprised if these - Waymo and Pony.ai - were the last rounds of the financing rush in this business for a long time. I expect a lot of the self-driving enthusiasm to fade away once the economy really starts hitting the post-COVID-19 reality, but we will have to see how that unfolds.
Deep learning in clinical applications
There was some buzz about deep learning replacing radiologists - nonsense initiated by Hinton and promptly repeated by Andrew Ng. Since then there has been fair disillusionment in that area, and recently a paper was published studying the actual number of trials done to validate any of these extraordinary claims. The whole paper is available to read; let me just pull a few nuggets from the conclusion section:
Deep learning AI is an innovative and fast moving field with the potential to improve clinical outcomes. Financial investment is pouring in, global media coverage is widespread, and in some cases algorithms are already at marketing and public adoption stage. However, at present, many arguably exaggerated claims exist about equivalence with or superiority over clinicians, which presents a risk for patient safety and population health at the societal level, with AI algorithms applied in some cases to millions of patients. Overpromising language could mean that some studies might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients’ best interests.
And then subsequently:
What this study adds
- Few prospective deep learning studies and randomised trials exist in medical imaging
- Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards
- Data and code availability are lacking in most studies, and human comparator groups are often small
I'll leave that here without further comment.
CNNs in a toilet (literally)
Last but not least, in what at first sight looked like a joke, a group at Stanford published a paper in Nature Biomedical Engineering (!) about a camera-equipped toilet seat which, using various sensors and multiple cameras, analyzes excrement as well as the butthole and monitors them for signs of health problems. I'm actually not against such solutions (though having three cameras in a toilet seat seems like something that may cause some minor privacy issues), but I think having this published in Nature and paraded as some groundbreaking "research" is misplaced. If some startup wants to build such a device, sell it, get it FDA approved, and patent it, and if some people want to use it, I'm all for it. But doing all this only to get it published in Nature (a journal which, BTW, will publish any clickbait research title but zero replication studies) just seems out of place to me.
Summary
The AI pixie dust is vanishing as rapidly as Waymo's valuation. The realization that deep learning is not going to cut it for self-driving cars and many other applications is now an open secret. The AGI tech bros may find some comfort in the fact that Hinton, LeCun and Bengio don't foresee any AI winter on the horizon, but the events unfolding recently paint a different picture. Given the rapid spread of the coronavirus and its many unknown consequences (at the time of writing this article there were >0.5 million cases in the USA, 22k deaths, and 16 million freshly unemployed), the winter may come a lot quicker and be a lot more general (not just AI) than anyone could have expected.