A year ago I wrote a post summarizing the disengagement data that the state of California requires from the companies developing Autonomous Vehicles. The thesis of my post back then was that the achieved disengagement rates were not yet comparable to human safety levels. It is 2018 now and new data has been released, so it is perhaps a good time to revisit my claims.
Let me first show the data:
And, in a separate plot for better readability, just Waymo, the unquestionable leader of that race (so far at least):
So where did that data come from? There are several sources:
- California DMV disengagement reports for the years 2017, 2016, and 2015
- Insurance Institute for Highway Safety fatality data
- RAND "Driving to Safety" report
- Bureau of Transportation Statistics
One can easily verify the numbers plotted above against all of these sources (I sketch that calculation right after the definition below). Now, before we start any discussion, let's recall what California defines as a qualifying event:
“a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” Section 227.46 of Article 3.7 (Autonomous Vehicles) of Title 13, Division 1, Chapter 1, California Code of Regulations.
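As promised, here is a minimal sketch (in Python) of how the plotted rates can be reproduced from the sources listed above. Every number in it is an illustrative placeholder rather than a value taken from the reports, and the `miles_per_event` helper is purely my own naming; plug in the actual figures from the DMV, IIHS, and BTS documents to reproduce the plots.

```python
def miles_per_event(total_miles: float, events: int) -> float:
    """Miles driven per qualifying event (disengagement, crash, fatality, ...)."""
    return total_miles / events

# Hypothetical per-company totals in the style of the DMV reports:
# (autonomous miles driven, reported disengagements).
dmv_2017 = {
    "company_a": (350_000, 60),
    "company_b": (130_000, 100),
}

for company, (miles, disengagements) in dmv_2017.items():
    rate = miles_per_event(miles, disengagements)
    print(f"{company}: {rate:,.0f} miles per disengagement")

# Hypothetical national baseline in the style of the BTS / IIHS data:
# annual vehicle miles traveled and annual traffic fatalities (placeholders).
vmt = 3.2e12
fatalities = 37_000
print(f"human baseline: {miles_per_event(vmt, fatalities):,.0f} miles per fatality")
```

The same miles-per-event metric works for disengagements, crashes, and fatalities, which is what makes the comparison in the plots possible in the first place.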
Before I comment, let me start by acknowledging that disengagements are certainly not the same as crashes. There is some as-yet-unknown relationship between these two numbers, and certainly not all disengagements would inevitably have led to a crash. On the other hand, it would be naive to think that none (or only a tiny fraction) of these events would have led to a crash. When the software fails and, e.g., the control system of the vehicle hangs, it is more than likely that the end result of such a situation would not be good (anyone working with robots knows how rapidly things escalate when something goes wrong - robots don't have the natural ability to recover from a deteriorating situation). If that happened on a freeway at high speed, it could easily have led to a serious crash with either another car or a barrier. If it happened in a dense urban area at low speed, it could have led to injured pedestrians. Either way, note that Waymo only reports the events that fulfill the California definition, i.e. actual failures or events threatening traffic safety, as concluded by their extensive simulations of each event. So the data above does not include benign events, such as taking control because the car is stuck for too long behind a garbage truck (other companies may report all events, hence the discrepancy between Waymo and pretty much everyone else). In summary, although the data above is not perfect, it is the only solid data we have.
Now that we have these prerequisites behind us, let me share a few of my own opinions on the subject:
First, the progress appears to be plateauing. It is not a very strong trend based on one or two data points, but the progress Waymo and others (perhaps excluding Cruise) have made over the past year is not overwhelming. In my personal opinion, the current level of disengagements roughly reflects the rate of tail events - more or less unusual conditions that require either some common sense or better anticipation of the behavior of other traffic. Behavior in these conditions can no longer be fixed by improved sensors; it requires better AI.
Secondly, it is hard to support the claim that AVs are much safer than humans. Now, as I've mentioned, the relationship between crashes and disengagements is not clear, but there is a gap of roughly two orders of magnitude - and that is a HUGE gap! Even if we assume that 1 in 10 of the disengagements Waymo concluded fulfill the California DMV definition would have caused a crash (which I think is very optimistic), there is still a 10x factor. The human data averages over all vehicles (the average car in the US is 11.2 years old; things look better for newer vehicles with more safety features), over all weather conditions (a good fraction of accidents happen in bad weather), and includes a fair number of DUI cases. If we were to exclude some of these factors, the human data would look much better; e.g., if we exclude DUI, the fatality rate drops by at least half. If I were to claim that an AV is safer than a human, I'd probably be more comfortable if it was actually safer than a sober human, not a drunk one (I don't drive with people whom I'd consider drunk).
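To make that back-of-envelope argument concrete, here is a minimal sketch. The two rates are illustrative placeholders chosen only to reproduce the rough magnitudes described above (they are not figures pulled from the reports), and the 1-in-10 crash-conversion factor is the optimistic assumption from the previous paragraph.

```python
# Back-of-envelope gap between AV disengagement rates and human crash rates.
# All rates below are illustrative placeholders; substitute real figures
# from the DMV reports and human crash statistics before drawing conclusions.

av_miles_per_disengagement = 5_000   # placeholder AV rate
human_miles_per_crash = 500_000      # placeholder human rate
crash_fraction = 0.10                # optimistic assumption: 1 in 10 disengagements would have crashed

# Treat only a fraction of disengagements as would-be crashes.
av_miles_per_crash = av_miles_per_disengagement / crash_fraction

raw_gap = human_miles_per_crash / av_miles_per_disengagement
adjusted_gap = human_miles_per_crash / av_miles_per_crash

print(f"raw gap (crashes vs. disengagements): {raw_gap:.0f}x")      # ~100x, i.e. two orders of magnitude
print(f"adjusted gap (1-in-10 assumption):    {adjusted_gap:.0f}x") # still ~10x in favor of humans
```

Excluding factors such as DUI from the human baseline would roughly double the human miles-per-crash placeholder, widening the adjusted gap further.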
Of course, the moment I post this, I'll hear somebody (mostly people who have no idea what they are talking about) claiming that 98% of accidents are caused by humans - yes, and that is because 99.9999999999% of cars are driven exclusively by humans! The next thing I'll hear is that the crash rate among, say, Waymo's experimental fleet is lower than the human average - no wonder: these are new vehicles, equipped with a ton of safety tech, driven conservatively by attentive professionals, extremely well serviced, and run in generally good weather conditions on well-known routes. Statistics can be quite tricky, and arguably a lot depends on interpretation, but it is fairly clear that if somebody deploys AVs before they are sufficiently safe, it will become very clear to everybody very quickly (after a handful of horrible crashes, such as that of Joshua Brown).
Will the future bring the AVs safer than humans that everyone loves to talk about? Probably yes, but looking at the current data, it will likely take years and require several changes in approach and groundbreaking discoveries (particularly in the space of AI). I think the disengagement line has to fall well below the human crash rate before the safety claim can seriously be made. And when that happens, I will gladly acknowledge it. But not yet.