A brief story of Silicon Valley's affair with AI

Once upon a time, in the 1980s, there was a magical place called Silicon Valley. Wonderful things were about to happen there and many people were about to make a ton of money. These things were all related to the miracle of the computer and how it would revolutionize pretty much everything.

Computers had a ton of applications ahead of them: completely overhauling office work, enabling entertainment via computer games and changing the way we communicate, shop and use the banking system. But back then they were clumsy, slow and expensive. And although the hope was there, many of these things would not be accomplished unless computers somehow got orders of magnitude faster and cheaper.

But there was Moore's law: over the decade of the 1970s, the number of transistors in an integrated circuit doubled every ~18 months. If this law were to hold, the future would be rosy and beautiful. The applications the markets were awaiting would be unlocked. Money was to be made.
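As a back-of-the-envelope illustration (my own numbers, just to show the compounding, not figures from Moore's papers), a doubling every 18 months works out to roughly a hundredfold increase per decade:

```python
# Compounding of Moore's law: one doubling every 18 months.
months = 10 * 12            # a decade
doublings = months / 18     # ~6.7 doublings
print(2 ** doublings)       # ~101x more transistors after ten years
```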

By the mid-1990s it was clear that it worked. Computers were getting faster and software was getting more complex so rapidly that upgrades had to happen on a yearly basis to keep up with the progress. Every new generation of CPUs was very noticeably faster than its predecessor, and every new generation of software products was significantly slower and more capable than its predecessor. Overall, by regularly upgrading to new hardware, the software seemed to run equally fast (or faster) yet offered more and more capabilities. Most of the gains were due to increased clock speeds (the early 1990s started with 33 MHz clocks; by 2000, clocks above 1 GHz were available), hence nothing had to be rewritten to make use of the increased computing power, everything would just run much faster on a new computer. Things were good.

By the late 1990s a new kind of processor made its way into homes: the Graphics Processing Unit (GPU). These processors were somewhat different from regular CPUs. Optimized for 3d graphics rendering, they packed a number of small cores performing work in parallel. Initially they were sold as separate accelerators (3dfx Voodoo), but soon were integrated into regular graphics cards (Nvidia Riva TNT). Games started looking much better and ran much faster. This again allowed the Silicon Valley tycoons to move more silicon into households.

A view inside a computer in the late 90s (left) and 20 years later (right): similar in size and components, but likely a few thousand times more powerful.

But by the early 2000s things started to look a little different. The dot-com bubble had just burst and many people lost a lot of money. Moreover, the good old strategy of increasing clock speeds hit some major roadblocks: in order to increase switching speed, the operating voltage of the circuits had to be kept relatively high. That, in turn, caused the chips to heat up, and subsequent speed gains became limited by the ability to dissipate the heat. To keep the party going, CPU manufacturers started increasing the number of execution cores in their chips. But at this point much of the software had to be rewritten to take advantage of that parallelism. Things stopped being magically faster with every new generation.

But worst of all, things for the most part did not have to be faster anymore. Much of the software stack had matured, applications had solidified and people no longer needed a new CPU or twice as much memory every year. Things had saturated. The majority of office work today could be done on a $35 Raspberry Pi. Even the gaming segment was largely saturated by game consoles. These consoles were sold below cost, with the initial investment recovered from fees hidden in games. Consoles offered convenience, an easy interface and a satisfactory gaming experience to the vast majority of people. This was a problem for Silicon Valley; things started slowing down.

By the mid-2000s another handy invention came to the rescue: the smartphone. And while the PC market began a noticeable slowdown, this new product category was booming rapidly, culminating in the various models of the iPhone introduced from 2007 onward. With smartphones, the rate of progress was not so much focused on CPU speed as on power usage (battery life) and the quality of sensors and screens. Cameras and screens have indeed made great progress over the last 10 years, but aside from one key player, Apple, not much of that new revenue was available to the Silicon Valley tycoons. Instead the Valley focused on the software side of things, in the form of service companies such as Uber, Netflix and other apps utilizing the new platform.

But from the get-go it was clear that the fire of the smartphone revolution would not burn forever. Indeed, by 2018 most people had realized they don't need a new $1000 smartphone every two years and, much like with PCs earlier, an older model does just fine for most applications. This resulted in Apple's stock taking a huge hit in the fall of 2018, bringing the company back to well below a $1T valuation.

With various markets drying out, the Valley needed something new. Something that would be as big as the PC revolution of the 90s. Something that would enable completely new applications and allow it to invade and disrupt new industries. Something that would reignite the need to move more silicon, creating demand for orders of magnitude more compute power. By 2012 two such potential opportunities had appeared: the blockchain and Artificial Intelligence (AI).

With blockchain (incarnated initially as bitcoin in 2009), the idea was to entirely replace the financial system by removing the keepers of the ledger (banks) and providing a self-certifying means of establishing remote transactions. The way blockchain was constructed additionally required a lot of computing power, to calculate the so-called proof of work. This was all Silicon Valley could have wished for: a new, highly lucrative application space that in addition required a ton of new silicon to satisfy its compute requirements.
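To get a feel for why proof of work soaks up so much compute, here is a minimal sketch of the general hash-grinding idea (a toy example with made-up block contents and difficulty, not Bitcoin's actual consensus code): a miner keeps trying nonces until a hash of the block falls below a target, and each extra bit of difficulty doubles the expected amount of hashing.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    """Toy proof of work: find a nonce so that SHA-256(block_data + nonce)
    falls below a target with `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce   # valid proof found, cheap for anyone to verify
        nonce += 1         # keep grinding - this is where the compute (and electricity) goes

print(mine(b"toy block"))
```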

AI appeared on the Silicon Valley radar roughly in 2012, when an unknown Canadian gentleman named Geoff Hinton, who had spent the previous 30 years working in the esoteric space of connectionist models and neural networks, along with his students absolutely killed the ImageNet object classification competition using a deep neural network implemented on a GPU [though credit should also go to Jürgen Schmidhuber, whose group had already been winning competitions with GPU-based conv-nets in 2011]. Like the blockchain, this technology could open up a set of new applications and required a lot of new silicon to be sold. Many in the Valley quickly saw the potential and the money started flowing in a wide stream.

Neural network academics, who had been sitting dormant in their university offices since the last neural net winter of the 90s, quickly noticed the opportunity. Suddenly, they were invited to the Silicon Valley party. This new incarnation of the multilayer perceptron just kept on giving exciting new results for a while: object recognition and segmentation, speech recognition and machine translation kept getting better and better. These new capabilities were quickly absorbed by companies with a huge supply of data, such as Google or Facebook. But the excitement quickly spun out beyond these very real but arguably not very exciting applications.

Scientists, who are trained professionals in overpromising in grant proposals, now had a much better customer: venture capitalists. And these guys loved listening to tales of a glorious future of things that will revolutionize everything; their bullshit detector was set way below that of government funding agencies, and a paper published at the NIPS conference waived the need for any due diligence. Furthermore, AI is a space in which these fairy tales can easily be taken to the next level by adding a few drops of imagination-driving fiction: make comparisons to some well known sci-fi movies, make it look like we are in front of some amazing inflection point where AI will essentially improve itself beyond anything we could ever imagine, the singularity. This creates the ultimate fear of missing out (FOMO). In the past even the government funding agencies referred to above fell victim to AI promises, each time leading to a freeze in funding, the so-called AI winters.

But the Valley bought it and threw the biggest AI summer party ever. They bought it without a blink. R&D centers, non-profit labs and startups started swelling with deep learning scientists pumped fresh out of college, often without any industrial experience whatsoever. Startups bloomed, promising all sorts of wonders in the space of robotics, autonomous vehicles, autonomous drones and so on. And the solution to all of these problems was supposed to be deep learning: simply go deeper, train on more data and bigger GPUs. It was supposed to just magically work; it only needed more data and more compute. The party was on.

By 2018 a few started to realize that things might not work out this way. Most of the "real world" benchmarks in object recognition or segmentation started showing strong signs of diminishing returns. Trained on massive amounts of data on extremely powerful machines, these models were showing only modest gains in performance, and in some cases no gains at all. And the scientists did what they know how to do best: instead of making products, they cranked out lots of papers, some of them investigating the surprising limitations of the new technology.

Of all the directions in which deep learning was supposed to bring a revolution, only one kept on delivering good results: playing games. This is because games can be simulated on a computer and can generate orders of magnitude more data than would be practically possible to obtain and label in any real world application. In many cases, just training an agent to play one of these games can cost hundreds of thousands of dollars (in electricity and compute hardware alone). But the same trick does not work with real world problems, where labeled data is expensive and often does not even fully represent the corner cases of the problem at hand. And in all this AI frenzy, an old and forgotten observation by Hans Moravec, the so-called Moravec's paradox (that high-level reasoning takes comparatively little computation, while low-level perception and sensorimotor skills require enormous resources), became clearer and more obvious than ever.

Although deep learning enabled many new things in the broad area of computer perception, it did not even scratch the general problem of AI. And even in more down-to-earth applications of computer perception, where huge amounts of labeled data are not readily available, well designed and optimized solutions based on classical algorithms with hand-crafted features are easier to develop and perform more predictably in field applications.

The litmus test for advances in AI is self-driving car development. By 2016 many people in the Valley had convinced themselves that this technology was practically ready to deploy and would be one of the key pillars enabled by deep learning; after all, cars could already drive for miles without an intervention. This conviction was so strong that Tesla, the Silicon Valley based car manufacturer, even started selling the feature as an over-the-air upgrade (an update that was yet to come). By early 2019 these claims had been substantially tempered, and several lawsuits over selling vaporware were in progress. In the meantime, 2018 was a rough year for many autonomous vehicle companies, with a fatal crash in Arizona caused by an Uber AV and several fatalities in Tesla Autopilot related accidents.

At this point, even people in the Valley began to slowly realize that a fully self-driving car, driving everyone around like a taxi, is still a very distant prospect. Obviously cars will continue to have computers built into them, and in that sense the Valley has won, but this is a far cry from the earlier dreams.

In 2018 the price of bitcoin plunged from almost $20,000 to below $4,000, a more than 80% drop.

In 2018 another of Silicon Valley's big bets faced a serious blow: bitcoin, the flagship of the blockchain, collapsed in value by more than 80%. Many people lost a lot of money and the general enthusiasm for cryptocurrencies plunged.

Both deep learning and blockchain are very interesting technologies and they did enable things that were impossible before. Google image search is much better than it used to be. Style transfer filters are a very cool tool, which, among other things, made the featured image of this post possible. Machine translation is now good enough to find your way around a foreign country, but still far away from translating poetry. But neither of these improvements seems a big enough win to justify Silicon Valley's big bets. Neither of them looks as big and profitable as the crazy computer rush of the 90s.

As for AI, this hype cycle is not very different from the previous ones. We make computers do things that seemingly only adult, educated humans can do, and then realize these same computers cannot handle the things infants or animals take for granted. And for as long as we keep falling into that same trap, AI (as in actual, general AI) will remain a pipe dream. Many posts on this blog go deeper into these problems and propose solutions [1], [2], [3], [4], [5] among others.

Nvidia is one of the Valley companies selling the shovels for both the blockchain and AI gold rushes. Its stock action over the last four years reflects both the high hopes and the ongoing disillusionment.

It is hard to predict the future, but it looks like both of these bets were dead ends, at least for now. And perhaps, just as companies like Google and Facebook emerged after the dot-com crash, the end of the current hype cycle will give rise to similar jackpots in the space of blockchain or so-called AI. But as back then, only a very few will win, while many will lose big time.

