AI winter - Addendum

My previous post on AI winter went viral, almost to the point of killing my Amazon instance (it got well north of 100k views). It triggered a serious tweet storm and lots of discussion on Hacker News and Reddit. From this empirical evidence one thing is clear - whether the AI winter is close or not, it is a very sensitive and provocative subject. Almost as if many people felt something under their skin...

Anyway, in this quick followup post, I'd like to respond to some of the points and explain some misunderstandings.

Hype is not fading, it is cracking.

First off, many citations of my post put it in the context of the AI hype fading. That was not my point at all. The hype is doing very well. Some of the major propagandists have gone quieter, but much like I explained in the post, on the surface everything is still nice and colorful. You have to look below the propaganda to see the cracks. It would actually be great if the hype faded down, but that is not how it works. When the stock market crashes, it is not like everybody slowly admits they overpaid for their stocks and quietly goes home. It happens in sudden, violent attacks of panic, where everybody tries to sell off, while at the same time the same people pump the narrative so that they can find buyers [pump and dump]. A crisis is only announced once the market truly runs out of buyers and the pumpers run out of cash. So hype is a lagging indicator (often by quite a bit). I predict it will be like that with AI. Each fatality caused by a self-driving car will cut the number of VCs likely to invest in AI by half. Same for every AI startup that quietly folds. At the same time, those who have already invested heavily will be pumping the propaganda while quietly trying to liquidate their assets. It is only once there is nobody left to buy this, which is long after seed financing has dried up, that the AI winter becomes official.

It's the applications, stupid!

The main line of defense against the AI winter is that this time around AI actually brings profit and there are real applications. OK, there are applications: primarily image search, speech recognition and perhaps surveillance (aka Google, Facebook etc.). There is style transfer, which will certainly make Photoshop great again. But these are all almost 3 years old now. I actually went to the "Applications of Deep Learning" session at the last ICML conference in Sydney. Let me just put it very mildly: this was a superbly underwhelming session.
Now, regarding the influence on a winter, it actually does not matter how much money AI brings in today. What matters is how much people have invested, and hence how much return they expect in the future. If reality does not match those expectations, there will be a winter. Period. The amount of investment in AI in this cycle is enormous, and the focal point of that investment is autonomous vehicles - and by autonomous I don't mean remote controlled or with a safety driver, since this stuff is only economical if the cars are truly autonomous. Coincidentally, this is the application which I think has the smallest chance of materializing.

But Waymo!

But Waymo what? That they are buying up to 60,000 vans over an undefined period of time? So what? Uber ordered 20,000 Volvos late last year. I wonder how that deal is going. But Waymo tests AVs without safety drivers! Yes, in the quietest and slowest blocks of Phoenix, with perfect cellular reception so that they can constantly monitor these cars remotely. Oh, and BTW, they have a speed limit of 25 mph... Anyway, long story short: Waymo can deploy even a million LTE-monitored, remote-controlled cars; that proves nothing about autonomous cars, because such a deployment will happen at a massive loss. Obviously Google can pump money into them for as long as Google has the money, which will probably be for a while. The Google AV project has been around for 10 years, and I expect it to go on for another 10 years. Unless they hit and kill somebody - at that point they are done. And that is why they are extremely cautious.

A few recent examples of Deep fail

After stirring a violent tweet storm with my post, a few very interesting papers came out and a few others were brought to my attention:

  1. Do CIFAR-10 Classifiers Generalize to CIFAR-10? The study shows that on a newly generated test set, the performance of models across the board drops substantially. This highlights the very well known and yet constantly ignored and brushed-under-the-carpet problem of data snooping. Long story short: to get an unbiased performance number for your classifier, you can only use your test data once. But if that is so, and each time you devise a new classifier you have to test it on new data, your results are not reproducible anymore! Well, that is statistics, children; it has that nasty probability in it, I'm sorry. You can just test your model on a fixed, set-aside test set and the results are 100% reproducible, but they are biased. Pick your poison (see the toy simulation after this list).
  2. Semantic Adversarial Examples Given that the previous paper showed the models are not very robust even when tested on new samples carefully chosen to resemble the original training distribution, it should not be surprising that DL is not robust to samples from outside the original distribution.
  3. Why do deep convolutional networks generalize so poorly to small image transformations? and A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations Apparently the translational and rotational invariances of deep nets are slightly overrated. You can actually see some of the reported behavior in my ancient (mid 2016) post on vision, where I applied a state-of-the-art deep net to a few videos I took with my phone.

  4. Last but not least: One pixel attack for fooling deep neural networks If you thought that failing under a slight shift, rotation or hue change was already bad enough, wait till you read this one. It is enough to tweak a single pixel...
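To make the data snooping problem from item 1 concrete, here is a minimal toy simulation (not from the paper; all numbers are made up). Every candidate model is genuinely 70% accurate, yet picking the "best" one on a reused test set produces an inflated score that collapses back down on a fresh test set - the same qualitative drop reported for CIFAR-10.

```python
# Toy illustration of test-set reuse ("data snooping"). Hypothetical numbers only.
import numpy as np

rng = np.random.default_rng(0)

true_acc = 0.70        # every candidate model is genuinely 70% accurate
test_size = 2000       # size of the fixed, reused test set
n_candidates = 200     # number of models tuned against that same test set

# Measured accuracy of each candidate on the *same* reused test set:
# just binomial noise around the true accuracy.
reused_scores = rng.binomial(test_size, true_acc, size=n_candidates) / test_size

# Pick the "best" model based on the reused test set...
best = np.argmax(reused_scores)
print(f"best score on reused test set : {reused_scores[best]:.3f}")

# ...then evaluate that single winner once on a genuinely fresh test set.
fresh_score = rng.binomial(test_size, true_acc) / test_size
print(f"score on a fresh test set     : {fresh_score:.3f}")
# The reused-set score is biased upward (winner's curse); the fresh-set score
# falls back toward the true 70%.
```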

These, combined with good old gradient-derived adversarial examples, just exemplify how brittle these methods are. We are far from robust perception and, in my opinion, we are stuck in the wrong paradigm and hence not even moving in the right direction.
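For reference, "gradient-derived adversarial examples" means something as simple as the fast gradient sign method. Below is a minimal sketch in PyTorch, assuming a stock torchvision classifier; `image` and `label` are placeholders you would supply, and the epsilon value is arbitrary.

```python
# Minimal FGSM (fast gradient sign method) sketch. Model choice and epsilon
# are assumptions for illustration, not anyone's reference implementation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm(image, label, eps=0.01):
    """Return an adversarially perturbed copy of `image` (shape [1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, clamp to valid pixel range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Usage sketch: a perturbation barely visible to the eye often flips the label.
# adv = fgsm(image, label)
# print(model(image).argmax(1), model(adv).argmax(1))
```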

Happy reading!

