Waymo’s apparent quantum leap

Is anyone else bothered by the fact that Waymo reported a disengagement every 5.6k miles in CA in 2017? Best case, considering just the last few months, they had one every 8k miles. Going 8k miles between human interventions seems great, but if we assume even just 20% of those disengagements would have ended in an accident without a driver, that works out to one accident every 40k miles. That’s roughly three times worse than human-level performance.
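
To make that arithmetic explicit, here’s the back-of-envelope version. The 20% accident fraction and the human baseline are my assumptions, not anything Waymo reported, and published estimates of human crash rates vary a lot depending on whether unreported crashes are counted:

```python
# Back-of-envelope crash-rate comparison. The accident fraction and the
# human baseline are assumptions for illustration, not reported figures.
miles_per_disengagement = 8_000   # best case from the 2017 CA report
accident_fraction = 0.20          # assumed share of disengagements that would crash
human_miles_per_crash = 120_000   # assumed human baseline; estimates vary widely

implied_miles_per_crash = miles_per_disengagement / accident_fraction
print(f"Implied miles per crash: {implied_miles_per_crash:,.0f}")  # 40,000
print(f"Worse than human by: {human_miles_per_crash / implied_miles_per_crash:.1f}x")  # 3.0x
```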

You could say that they have had a runaway improvement since then. But consider that in 2016 they reported a disengagement every 5k miles. So in the span of one year, they went from 5k to 5.6k, a 12% improvement. Did they really, in the following year, then go from 5.6k to over 100k, an eighteenfold jump?
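
Running the same kind of sanity check on the improvement rate (the 100k figure is just the threshold implied above, not a Waymo disclosure):

```python
# Year-over-year improvement implied by the CA reports versus what a
# driverless launch would seem to require.
rate_2016 = 5_000      # miles per disengagement, 2016 report
rate_2017 = 5_600      # miles per disengagement, 2017 report
rate_needed = 100_000  # the threshold floated above (assumption)

print(f"2016 -> 2017 improvement: {rate_2017 / rate_2016:.2f}x")    # 1.12x
print(f"2017 -> needed:           {rate_needed / rate_2017:.1f}x")  # 17.9x
```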

Or you could say that they had a lot of mundane disengagements that wouldn’t have mattered. But by their own admission, they already filter those out based on simulations of what “would” have happened.

So how do they go from that level of performance to announcing completely driverless service in Phoenix not even a year later? Someone please help me see the missing data or explain the disconnect. Is CA just that much harder than Phoenix? Are they okay with sub-human-level performance?

EDIT: The 2015 report (which I was much less familiar with) actually lays out some of this data in a much better manner. Although I still wonder why/how they differentiate between “safety significant” and “simulated contact”. Sounds potentially like word games to me. What is the “safety significance” if the car doesn’t hit anything? Are they considering hardware failure, for example, as safety significant, but not a potential crash? Something still seems off, but it’s explanation enough to satisfy my immediate curiosity for now. Thanks to all for responding and pointing me toward it!


