For some reason, people love to be scared. People also love to spook other people - it gives them a sense of advantage, of power. And in the hyper-stimulated era of social media and numerous dopamine shots per hour, this dynamic takes on a completely new scale. It turns into a psychotic rumble between complete and unquestionable excitement and, minutes later, fear-mongering, utter dystopia. This has been going on with Artificial Intelligence over the last decade, and it has become almost painfully absurd. Social media "influencers" ride this bipolar mania to their advantage, while normal people are fed a completely unrealistic view of reality - both far too optimistic and ridiculously dystopian. Much like with food - put in a load of sugar, and to balance it out throw in a ton of salt, and the thing tastes better. Horrible for your health, but it does taste better. In this blog, for quite a few years now, I have been trying to smuggle in some healthy diet for a change: call out the hype, discuss outstanding problems and possible solutions. So let's dig in and digest some salad and veggies.
First of all, let's just not use the word A.I. A.I. is a … Read more...
I was raised in a rational family with a strong belief in science. When I got my master's degree I was pretty much convinced that we generally know everything about the world and that science is a more or less complete endeavor. By the time I got my PhD, however, my confidence had dropped quite significantly. Now, almost 13 years after my PhD defense, my view is that science is actually a rather fragile thread we use to hold together and explain the various mysteries of the world. And that is not to say science is not the right method - it is! But I now view science as any other social activity: influenced by zeitgeist, politics, fashion and financing, and often stuck in a dogma no different from the dogma that threatened Galileo or Copernicus. In fact, in many ways, contrary to popular belief, I believe today's science is a lot more dogmatic than in the early twentieth century, and probably worse than it was during the Enlightenment. Let me discuss a few areas where, in my opinion, mainstream science is stuck in a dogma, and let me highlight some interesting alternative theories that may be able to challenge the status … Read more...
In the modern era of computers and data science, a ton of the things we discuss are of a "statistical" nature. Data science is essentially glorified statistics with a computer, AI is deeply statistical at its very core, and we use statistical analysis for pretty much everything from economics to biology. But what actually is it? What exactly does it mean that something is statistical?
The short story of statistics
I don't want to get into the history of statistical studies, but rather take a bird's-eye view of the topic. Let's start with a basic fact: we live in a complex world which provides us with various signals. We tend to conceptualize these signals as mathematical functions. A function is the most basic way of representing the fact that some value changes with some argument (typically time, in the physical world). We observe these signals and try to predict them. Why do we want to predict them? Because if we can predict the future evolution of some physical system, we can position ourselves to extract energy from it when that prediction turns out accurate [but this is a story for a whole other post]. This is very fundamental, but in principle … Read more...
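To make "observe a signal and try to predict it" concrete, here is a minimal toy sketch (my own illustration, not anything from the post - the signal, model order and noise level are arbitrary picks): we fit a simple order-2 autoregressive predictor to a noisy sine wave by least squares and check how well it guesses the next sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "signal": some value changing with time, observed with noise.
t = np.arange(500)
signal = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)

# Fit a simple order-2 autoregressive predictor by least squares:
# predict x[n] from x[n-1] and x[n-2].
X = np.column_stack([signal[1:-1], signal[:-2]])   # lagged observations
y = signal[2:]                                     # the next values
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead predictions and their error.
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"AR(2) one-step RMSE: {rmse:.3f}")
```

A pure sine satisfies an exact order-2 recursion, so the only irreducible error here is the added noise - a stand-in for the general game of squeezing predictability out of an observed signal.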
It's time for another post in the Tesla FSD series, which is part of the general self-driving car debacle discussed in this blog since 2016 [1,2,3,4,5,6,7]. In summary, the thesis of this blog is that AI hasn't reached the necessary understanding of physical reality to become truly autonomous, and hence the contemporary AI contraptions cannot be trusted with important decisions such as those risking human life in cars. In various posts I go into detail on why I think that is the case [1,2,3,4,5,6], and in others I propose some approaches to get out of this pickle [1,2,3]. In short, my claim is that our current AI approach is statistical at its core and effectively "short tailed" in nature, i.e. the core assumption of our models is that there exist distributions representing certain semantic categories of the world, and that those distributions are compact and can be efficiently approximated with a set of rather "static" models. I claim this assumption is wrong at the foundation; the semantic distributions, … Read more...
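The short-tailed vs fat-tailed distinction above can be shown with a quick numerical toy (my own sketch, not from the post; the two distributions are arbitrary picks): for a Gaussian, samples several times larger than the typical value are essentially absent, while for a fat-tailed Pareto they keep arriving - which is what makes a compact "static" model of such a distribution treacherous.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Short-tailed: standard Gaussian.  Fat-tailed: Pareto with alpha = 1.5.
gauss = rng.normal(size=n)
pareto = rng.pareto(1.5, size=n) + 1.0  # shift Lomax samples to classic Pareto

# How often does each distribution exceed 6x its own median scale?
# (a crude tail probe, but enough to show the qualitative difference)
gauss_tail = np.mean(np.abs(gauss) > 6 * np.median(np.abs(gauss)))
pareto_tail = np.mean(pareto > 6 * np.median(pareto))

print(f"Gaussian  P(|x| > 6*median): {gauss_tail:.6f}")
print(f"Pareto    P( x  > 6*median): {pareto_tail:.6f}")
```

The Gaussian tail event is on the order of one in tens of thousands; the Pareto one shows up a few percent of the time - orders of magnitude apart for the same "how far out" question.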
There is a never-ending discussion which can be summarized very concisely in the tweet below:
And frankly any time I see similar exchanges (and I see a lot of them) I get mildly irritated. Let me get to the essence.
Computers have undoubtedly been the defining invention of the past century, and hence they have become a strong theme in our culture. Since the theory on which computers are built is a branch of mathematics - by definition an abstract discipline - computers have also had a major impact on philosophy. We learned, for example, that everything we can write an equation for can in principle be calculated on a computer. This leads to somewhat profound philosophical consequences, summarized as follows:
- Stuff we can write equations for is in principle computable
- We can write equations for physical interactions of molecules
- Everything is made of molecules
- Hence everything is computable
- Hence in principle we could simulate an entire brain in a computer
- And since we can in principle simulate a Turing machine in a brain, brains and computers have to be equivalent
- Furthermore, in principle we could simulate the entire Universe
- Hence the Universe must be a computer too
When … Read more...
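As an aside, the "simulate a Turing machine" premise in the chain above rests on how little machinery a Turing machine actually needs; a complete simulator fits in a screenful of code. The sketch below is a toy of my own (not from the post) - the sample machine increments a binary number.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (write, move, next_state);
    move is -1 (left) or +1 (right); the machine halts in state 'halt'.
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            cells = [tape[i] for i in sorted(tape)]
            return "".join(cells).strip(blank)
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    raise RuntimeError("machine did not halt")

# A tiny machine: binary increment (head starts at the leftmost bit).
inc = {
    ("start", "0"): ("0", +1, "start"),   # scan right to the end
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),   # fell off the end, go back
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),    # absorb the carry
    ("carry", "_"): ("1", -1, "done"),    # overflow: new leading 1
    ("done",  "0"): ("0", -1, "done"),    # rewind to the left edge
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_", +1, "halt"),
}

print(run_tm(inc, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

Anything that can track a handful of states and read/write one cell at a time can run this loop - which is exactly the load-bearing (and debatable) step in the syllogism.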
The pandemic has largely overwhelmed the news cycle over the past year, and in doing so has influenced and largely deflated the AI hype train. There were a few developments, though, which I'd consider significant - some of them very well predicted by articles in this blog, and some surprising. Let's jump right in.
Million Robotaxis in wonderland
Since it is 2021 after all, the most immediate AI flop is related to Tesla robotaxis, or rather the lack thereof. Back in April 2019, when he needed to raise money, Elon Musk promised that Tesla would achieve L5 autonomy by the end of 2020 [a promise reiterated in April 2020]. The famous autonomy day was pumping hype and showing limited demos available to some guests of the show. These demonstration rides were no different from the demo shown in 2016 (which, as it later turned out, was eventually recorded after many failed attempts). In fact, Elon Musk claimed in 2016 that the self-driving problem was essentially solved; here is a quote from this interview:
This was later followed by various promises of an autonomous coast-to-coast drive by the end of 2017, later pushed back and eventually canceled altogether. To be fair, Musk wasn't the only Silicon Valley … Read more...
2020 is a very strange year and a dumpster fire in many respects. Everything is still holding together, but it feels like the news we get is just progressively more absurd. The same goes for AI, where a slow-motion train wreck is in progress, eliminating more and more hyped-up companies and researchers. There are still areas where money is pouring into the field, but it feels like a far cry from what it was during the peak hype a few years ago. I fully expect a shift from private to more public money, as governments are always late in the hype cycle. So there could still be more waste to come, though there is a feeling of decline in the air. Anyway, let's jump into a few highlights of the recent months.
DeepMind Alpha Fold
DeepMind has been rather quiet, and even the popular press noticed a significant decrease in the level of hype, but recently they managed to show some progress on the protein folding problem. This problem is of high practical importance in biology, so at least it's good to see the company using its incredible resources on something that we all may eventually … Read more...
Since many of my posts were mostly critical and arguably somewhat cynical, at least over the last 2-3 years, I decided to switch gears a little and let my audience know that I'm actually very constructive and busy building stuff most of the time, while my ranting on the blog is mostly a side project to vent, since above everything I'm allergic to naive hype and nonsense.
Nevertheless, I've now worked in so-called AI/robotics/perception for at least ten years in industry (and prior to that did a PhD and had a rather satisfying academic career). I've had a slightly different path than many working in the same area and hence have a slightly unusual point of view. For those who never bothered to read the bio: I was excited about connectionism way before it was cool, got slightly bored by it and was drawn into more bio-realistic/neuroscience-based models, worked on those for a few years and got disillusioned, then worked on robotics for a few years, got disillusioned by that, went on and got a DARPA grant to build the predictive vision model (which summarized all I learned about … Read more...
I started contemplating this post in mid-February 2020 while driving back from Phoenix to San Diego, a few miles after passing Yuma, staring into the sunset over the San Diego mountains on the horizon a hundred miles ahead. Since then the world has changed. And by the time you read this post a week from now (early April 2020) the world will have changed again. And by the summer of 2020 it will have changed several times over. I won't go too much into the COVID-19 situation here, since I'm not a biologist; my personal opinion though is that it is the real deal - a dangerous disease spreading like wildfire, something we have not really seen since the Spanish Flu of 1918. And since our supply chains are a lot more fragile, our lifestyles a lot more lavish and everybody is levered up, it has every chance of causing economic havoc unlike anything we've seen in the past two centuries. With that out of the way, let's move on to AI, as the economic downturn will certainly have a huge impact there.
Let's start with the article in Technology Review that came out in February, going deeper inside … Read more...
California DMV disengagements reports are out for 2019, and it is time to plot some data.
As usual, these numbers do not reliably measure the safety of AVs, and there are plenty of ways to game them or overreport. Please refer to last year's post for a deeper discussion (and the 2017 post here, 2018 post here) of why these numbers are essentially flawed. Nevertheless, these are the only official numbers we get - the only glimpse of transparency into this giant corporate endeavor called the "self driving car".
First the disclaimer - this data came from
- California DMV disengagement reports for years 2019, 2018, 2017, 2016 and 2015
- Insurance Institute for Highway Safety fatality data.
- RAND driving to safety report.
- Bureau of Transportation Statistics
all of which is easily verifiable. And so here comes the plot everyone is waiting for (click to enlarge):
And as usual a quick commentary:
First of all, the only players who really have a number anywhere in the vicinity of interesting are Waymo, Cruise and Baidu. I'll discuss Baidu later, since their sudden jump in performance seems a bit extraordinary. Nevertheless even Waymo and Cruise disengagements are still approximately … Read more...
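For what it's worth, the headline metric behind the plot is trivial to compute from the raw reports. The sketch below uses entirely made-up company names and figures (the real ones are in the DMV filings listed above):

```python
# Miles per disengagement from raw report numbers -- the metric behind
# the plot.  All figures below are placeholders, NOT real DMV data.
reports = {
    "CompanyA": {"miles": 1_450_000, "disengagements": 110},
    "CompanyB": {"miles": 830_000, "disengagements": 68},
    "CompanyC": {"miles": 12_000, "disengagements": 700},
}

# Sort best-first by miles per disengagement and print a small table.
ranked = sorted(reports.items(),
                key=lambda kv: kv[1]["miles"] / kv[1]["disengagements"],
                reverse=True)
for name, r in ranked:
    mpd = r["miles"] / r["disengagements"]
    print(f"{name:10s} {mpd:10,.0f} miles per disengagement")
```

The arithmetic is the easy part; as discussed above, what counts as a "disengagement" (and which miles get counted) is where the gaming happens.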
Earlier last week I posted a poll on Twitter asking if my readers would like me to post a GPT-generated article. The votes were very evenly distributed:
The remainder of this article was generated using the GPT-2 network (using this site), primed on bits of my other articles to convey some of the style. The images were generated by https://app.generative.photos/ from RosebudAI - a recent hot startup in the AI space. When done reading, please consider future historians analyzing the outburst of AI in 2010-2020 and decide whether they'd be impressed or whether they'd go "WTF were they thinking back then!?".
The study was done in the summer of 2014, but there have been so many recent news stories about Uber (and similar companies) and the impact it has had on public safety, ”We're very happy” to add to the body of knowledge we've accumulated.
What can we learn about the state of public transportation?
Our findings indicate that if public transportation is to be made safe, “we have to build the systems on a much higher level”, and that this will require substantial change from the traditional public-sector perspective. We've discussed the problems in the above graphic:
In … Read more...
It's been 7 months since my last commentary on the field, and as this has become a regular appearance in this blog (and in fact many people apparently enjoy this form and keep asking for it), it is time for another one. For those new to the blog: here we generally strip the AI news coverage of fluff and try to get to the substance, often with a fair dose of sarcasm and cynicism. The more pompous and grandiose the PR statement, the more sarcasm and cynicism - just to provide some balance in nature. The field of AI never fails to deliver on pompous and grandiose fake news, hence I predict there will be material for this blog for many years to come. Now that the introductory stuff is behind us and you've been warned, let us go straight to what has happened in the field since May 2019.
Self driving cars
As time goes on, more and more cracks are showing in the self-driving car narrative. In June, one of the prominent startups in the competition - Drive.ai - got acqui-hired by Apple, reportedly days before it would have run out of cash. For those not … Read more...
Welcome back. First of all, apologies for not posting as frequently as I used to. As you might imagine, blogging is not my full-time job and I'm currently extremely involved in a very exciting startup (something I'm going to write about soon). On weekends and evenings I'm busy helping care for a 7-month-old infant, and altogether that leaves me with very little time. But I'll try to do better soon, since a lot is going on in the AI space and signs of cooling are now visible all over the place.
In this post I'd like to focus on the recent book by Gary Marcus and Ernest Davis, Rebooting AI. Let's jump in.
If you are a person who is not necessarily deeply involved in the recent (last 10 years or so) developments in AI, and you've instead been building your image of the field based on flashy PR statements by various big companies (including Google, Facebook, Intel, IBM and numerous smaller players) - this is a book for you. The first part of the book goes thoroughly through various press releases and "revolutionary" products and tracks how these projects failed, either spectacularly or quietly.
Reading the first … Read more...
This post is not about AI and not about winter. I have a few of those coming, but this one is about something different. I hope you don't mind.
A friend of mine recently gave me a lot to think about by posing the following thought experiment:
Imagine you are taken back in time. To what extent would you be able to advance the civilization of the given era with all the knowledge in your head (no notebooks).
The initial reaction is obviously that, since we all live and breathe the current technical civilization, one should be able to recover almost everything, right? There are so many uncertainties to which we already know the answers, so this should be much easier than getting there without such insight?
When you actually give it some thought, you will realize that things may not be so easy. First of all, in most cases, if somebody was taken back in time but left in the same place, they would end up in the middle of nowhere and would have to first survive to even make contact with any contemporary humans. Say, San Diego 300 years ago was an empty coastal desert, and … Read more...
It's been roughly a year since I posted my viral "AI winter is well on its way" post, and as I promised, I'll periodically post an update on the general AI landscape. I posted one some 6 months ago and now it is time for another. There has been a lot going on lately and none of it has changed my mind - the AI bubble is bursting. And as with every bursting bubble, we are in a blowoff phase in which those who have the most to lose are pulling out the most outrageous confidence-pumping pieces they can think of - the ultimate strategy to con some more naive people into giving them money. But let's go over what has been going on.
The serious stuff
First, let's go over the non-comical stuff. Three of the founding fathers of deep learning - Geoffrey Hinton, Yoshua Bengio and Yann LeCun - received the Turing Award, the most prestigious award given out in computer science. If you think I will somehow question this judgement, you will be disappointed; I think deep learning is well worth the Turing Award. The one thing that in … Read more...