AI circus, mid 2019 update

Introduction

It's been roughly a year since I posted my viral "AI winter is well on its way" post, and as I promised, I'll periodically post an update on the general AI landscape. I posted one some six months ago and now it's time for another. A lot has been going on lately, and none of it has changed my mind - the AI bubble is bursting. And as with every bursting bubble, we are in a blowoff phase, in which those who have the most to lose pull out the most outrageous confidence-pumping pieces they can think of - the ultimate strategy to con a few more naive people into giving them money. But let's go over what has been going on.

The serious stuff

Firstly, let's go over the non-comical stuff. Three of the founding fathers of deep learning - Geoffrey Hinton, Yoshua Bengio and Yann LeCun - received the Turing Award, the most prestigious award given out in computer science. If you think I will somehow question this judgement, you will be disappointed: I think deep learning is well worth the Turing Award. The one thing that, in my personal opinion, leaves a bit of a bad aftertaste is the omission of Jürgen Schmidhuber. Think what you wish about Schmidhuber - I'm inclined to agree that he can sometimes be awkward - but his contribution to the field of deep learning is undeniable.

By the way, all these guys have been rather quiet. Hinton did finally join Twitter, but in his modest style has not tweeted anything I would consider untrue or overly enthusiastic. Yann LeCun occasionally promoted his research at FAIR, but nothing unusual there either. The same goes for Bengio, who is not active on social media.

My other favorites were quiet too. Fei-Fei Li left her sabbatical at Google Cloud last fall and went back to her position at Stanford. Andrew Ng was unusually quiet, perhaps because he recently had a baby - I sincerely congratulate him and his wife. Having had a baby myself just two months ago, I know what a joy that is, but I also recognize that a baby may stand in the way of his 90-hour work weeks and delay the inevitable AI singularity.

The most hilarious set of events in AI over the past few months revolved around OpenAI and Tesla.

The comical stuff

OpenAI - a non-profit organization with the mission of solving the problem of Artificial General Intelligence (AGI) and making sure that this discovery remains open to the general public rather than in the hands of some vicious corporation that would profit from it - released a text generation model called GPT-2 in February. To everyone's surprise they did not release the trained weights, citing concerns over possible misuse, which stirred an obvious controversy among researchers and the AI crowd. I'm not sure how these guys can claim that they are "open" and yet not release a crucial part of the model (and by the way, as far as I'm concerned, until they release the full model, GPT-2 could just as well be implemented using a Mechanical Turk). Even though GPT-2 generates reasonable-looking text, I'm not sure how one could abuse it to generate fake news or spam, or really use it for anything beyond amusement.
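If you want to judge the "danger" for yourself, here is a minimal sketch of sampling from the small GPT-2 checkpoint that was actually released (not the full model discussed above). It assumes the Hugging Face transformers package and PyTorch are installed; the prompt and sampling parameters are arbitrary choices of mine:

```python
# pip install torch transformers
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# The small, publicly released GPT-2 checkpoint.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The AI winter is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a short continuation; the output is amusing rather than obviously dangerous.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and estimate the fake-news potential yourself.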

Anyway, OpenAI - which is apparently no longer open - recently came out with the idea of going for-profit. Yes: the organization that was supposed to be the Prometheus of the 21st century, the monastery of unbiased and fair-minded researchers working hard to provide humanity as a whole with the flame of AI, is no longer open and is now actually for profit. But wait, they still hold on to the grandiose mission, because the profits will be capped - each investor can recover only 100x of what they put in. Sweet. The only behind-the-scenes reason I can think of for all this is that they can no longer raise money as a non-profit.

Let's for the moment put aside the fact that the company has so far made exactly zero profit and is not structured like a startup by any stretch of the imagination (but rather like a research lab). Which actually leads us to an amazing interview, summarized in an article, in which Sam Altman - a well-connected dude in the Bay Area who used to run Y Combinator and is now the CEO of the (no longer open) OpenAI - provides us with some of the best quotes of this hype cycle. I encourage everybody to read/watch the whole thing, but let me just quote the best part:

Asked for example, how OpenAI plans to make money (we wondered if it might license some of its work), Altman answered that the “honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue.”

Continued Altman, “We’ve made a soft promise to investors that, ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'” When the crowd erupted with laughter (it wasn’t immediately obvious that he was serious), Altman himself offered that it sounds like an episode of “Silicon Valley,” but he added, “You can laugh. It’s all right. But it really is what I actually believe.”

Yeah. I don't know what to add. I'm sure figuring out how to generate revenue should be much easier than figuring out AGI, but not in the twisted logic of a hype bubble. In a nutshell the proposition is: I believe we can build AGI; I really don't have any evidence for it, but I strongly believe we can do it if you give us billions of dollars.

I wish I could believe this too, but unfortunately I don't, and I think OpenAI has turned into a total scam. This judgement is further reinforced by looking at what some of these OpenAI people tweet; take e.g. this:

Wojciech Zaremba, a guy who used to be very rational back when he was at NYU (I exchanged a few emails with him back in 2015), has now become a "believer". There is so much wrong with the tweet above that it is hard to imagine anybody with any understanding of data and reality, and any integrity as a scientist, saying something like this. But this is what these seemingly smart people believe in the Bay Area echo chamber, and thousands of young freshmen in the field follow it blindly.

To be concrete: there are millions of ways in which the approach described above (I'm not even sure exactly what "approach" means here) may not work, and plenty of evidence out there that it indeed does not work. One potential reason: even if they had all that data, it is very likely biased, because people explicitly avoid edge cases. Another potential reason: the edge cases form a very sparse set, statistically overwhelmed by the "non-edge cases". Deep learning, which simply optimizes total loss over all data points, may completely wash the corner cases out, and each corner case may need its own data rebalancing and loss function tuning. Another reason is that even with 0.5 million Teslas on the road, and even if they collected all the driving data (which they don't), this could easily not be enough to cover all the edge cases if the edge cases are long-tailed and non-stationary (another way to imagine it: you can train a model on historical stock market data all you want, but when you put it to work with real money you will go bankrupt faster than you can imagine). Another reason is that the set of edge cases with humans on the road is different from the set of edge cases with AVs on the road, and is indeed constantly changing.

Another, perhaps least obvious reason is that the particular deep net they are using may not have the expressive power to represent what it needs to represent (even assuming they have all the data required to train it). Multilayer perceptrons have been known since the 80's and are universal approximators, and yet it took a very specific, finely tuned architecture called the convnet, along with a bunch of tricks to improve convergence and a sufficient number of trainable parameters, to solve ImageNet. Even today you can't just take a generic fully connected multilayer perceptron and expect it to learn whatever task you ask of it. Even if you have a ton of data, it takes lots of tuning and meta-parameter searching for anything to work out. And finally, even if in principle this were all doable, it is not clear whether it would be doable within the confines of the computer built into a Tesla. Others in the AV space have stuffed their cars with LIDARs - which, although expensive, solve the majority of what Tesla tries to solve with AI (such as obstacle avoidance and traversability) - and with full-blown gaming GPUs offering way more compute power than even the latest Tesla hardware, and yet according to the data we have it is clear that nobody is even close to full autonomy. So there you go, just a handful of reasons why Tesla's approach to autonomy does not "have to work" - reasons every data scientist worth their salt should be able to recite even straight after drinking a bottle of vodka.
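To make the "washing out" point concrete, here is a tiny, purely illustrative Python sketch (nothing to do with Tesla's actual stack; the numbers are invented): with edge cases making up 0.1% of the data, a model that simply ignores them still achieves a great-looking average loss, and only explicit reweighting - exactly the kind of per-case rebalancing mentioned above - makes the rare cases visible to the optimizer.

```python
import numpy as np

# Toy example: 999,000 "ordinary" samples vs. 1,000 edge cases (0.1%).
n_common, n_edge = 999_000, 1_000
y = np.concatenate([np.zeros(n_common), np.ones(n_edge)])  # 1 = edge case

# A degenerate "model" that always assigns tiny probability to the edge class.
p_edge = np.full_like(y, 1e-3)

# Per-sample binary cross-entropy loss.
eps = 1e-12
loss = -(y * np.log(p_edge + eps) + (1 - y) * np.log(1 - p_edge + eps))

print(f"average loss:            {loss.mean():.4f}")          # ~0.008, looks great
print(f"loss on edge cases only: {loss[y == 1].mean():.4f}")  # ~6.9, terrible

# Reweight the rare class so it can no longer be ignored.
w = np.where(y == 1, n_common / n_edge, 1.0)
print(f"reweighted average loss: {np.average(loss, weights=w):.4f}")  # ~3.5
```

The overall loss says the model is doing fine while it fails on every single edge case - and this is before we even get to bias, non-stationarity or expressive power.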

Speaking of Tesla's autonomy day, some pretty bold statements were made there, such as that Tesla will have 1M autonomous taxis on the road in 2020 - the chances of which are, in my opinion, zero. I've already expressed my take on Elon Musk and his crazy promises and ideas, and I don't want to spend too much time on that in this post (frankly I don't want to attract Tesla fanboys; I'll let them live in their own fantasy world).

But let's look at Tesla from another point of view, since an interesting development happened earlier this year: Lex Fridman, a research scientist at MIT, released a study in which he claimed (along with several co-authors) that, contrary to a mountain of literature on human-machine interaction, drivers using Autopilot remain vigilant and attentive. The release of the study itself was surrounded by some controversy. First, Fridman started soliciting journalists to cover the upcoming release. Scientists such as Anima Anandkumar (research director at Nvidia) tried to encourage him to submit the research to peer review before making any flashy splashes, for which she got famously blocked by Lex on Twitter, along with everyone even remotely critical of his approach (must be quite a snowflake, this Fridman). Once the study made the headlines (let's emphasize: an unreviewed study), Fridman (who is a rather open Tesla fanboy) tweeted that neither Tesla nor Musk had anything to do with this (positive for Tesla) study, subsequently deleted those tweets, and then tweeted something about integrity (perhaps because he rightfully felt it was being questioned). Two weeks later he got invited to do a podcast with Elon Musk, and then he himself got invited onto Joe Rogan. He used both of these occasions to shamelessly promote himself, which is what he does all the time anyway.

I won't go into the details of the paper itself here, but I will point my readers to a great podcast interview with Missy Cummings, a professor of human-machine interaction at Duke, who goes into great detail on the many ways in which this MIT study is completely flawed.

Nevertheless, given all the developments described above, I officially name Lex Fridman a candidate for the biggest clown of the AI scene, AD 2019. Even though it is only mid-year and the competition is still open, I think his chances of winning are pretty good.

While we are on Tesla, the NTSB released its preliminary report on the fatal Florida crash, indicating (to nobody's surprise) that Autopilot was indeed involved in the incident. This is at least the fourth documented Autopilot fatality, surprisingly similar in circumstances to the famous Joshua Brown accident. While this is perhaps not a lot in the grand scheme of things, it should be noted that a comparable ADAS system from Cadillac - Super Cruise - has, as of the time of writing, scored exactly zero fatalities (and not even a single crash that appears to have been caused by a fault of the system). The subtle difference is that Super Cruise has a driver monitoring system, which ensures the driver remains vigilant, while Tesla's does not - all of which is yet another piece of anecdotal evidence that Fridman's study is... what it is.

In other news, since my previous AI update two more AI/robotics companies have gone out of business - Jibo and Anki. Jibo was a particularly interesting case for me personally, since I knew how it would end the moment I saw their Indiegogo campaign. Founded by MIT researcher Cynthia Breazeal, the project instantly gained credibility, promised functionality that was simply impossible back then (and even now), and managed to pull in (and incinerate) some $72M. I'm not sure what exactly is going on at MIT right now: first Cynthia Breazeal - a claimed expert in robotics - promised all those Jibo miracles, apparently unaware that most of them were not even on the drawing board, and then Lex Fridman makes a clown of himself (though to MIT's credit, he is just a hired research scientist, not an MIT graduate).

Aside from that, the government of Ontario has reportedly decided to cut spending on AI, including funding for Hinton's Vector Institute, causing quite a bit of outrage in AI circles. This is really only the first serious frostbite of the incoming winter; I suggest all those outraged by it start looking for warm clothes and perhaps a different job.

Summary

So there you go, the state of AI in mid-2019. As I expected, the scene is becoming more and more absurd - essentially a clown show. I think we will soon learn that the whole AI bubble was indeed just a joke, something I've been saying on this blog from the very beginning.
