It's time for another post in the Tesla FSD series, part of the general self-driving car debacle discussed on this blog since 2016 [1,2,3,4,5,6,7]. In summary, the thesis of this blog is that AI has not reached the necessary understanding of physical reality to become truly autonomous, and hence the contemporary AI contraptions cannot be trusted with important decisions such as those risking human life in cars. In various posts I go into detail on why I think that is the case [1,2,3,4,5,6], and in others I propose some approaches to get out of this pickle [1,2,3]. In short, my claim is that our current AI approach is statistical at its core and effectively "short-tailed" in nature, i.e. the core assumption of our models is that there exist distributions representing certain semantic categories of the world, and that those distributions are compact and can be efficiently approximated with a set of rather "static" models. I claim this assumption is wrong at the foundation: the semantic distributions, …