In the week since the fatal accident that shocked the nascent autonomous vehicle industry, scrutiny has intensified on the performance of Uber’s vehicles, on regulatory oversight and on the risks associated with public trials.
The incident, which made international headlines, has served as a lightning rod for debates around safety and culpability for the technology and its testing regimes, with commentators analysing everything from the design of the accident site to the underlying assumption that a human supervising an autonomous system is an adequate safeguard.
Tempe police released footage from the Uber Volvo XC90’s interior and exterior cameras on March 22. Some viewers may find the footage distressing; discretion is advised.
The disturbing video shows the victim of the accident, 49-year-old Elaine Herzberg, hurrying to cross the darkened lane as the vehicle maintains its course and speed. An interior camera shows the safety driver glancing up at the road only occasionally from his lap, and captures his reaction as he anticipates the imminent collision.
In a statement to the San Francisco Chronicle, Tempe police chief Sylvia Moir said: “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how [the pedestrian] came from the shadows right into the roadway.”
Industry experts soon chimed in to point out that the police statement erroneously applied human standards: the video in fact demonstrates a failure of Uber’s systems. Uber’s autonomous vehicles are equipped with LiDAR sensors, which should have been unaffected by darkness, and with radar, which should have detected the metal frame of Elaine’s bicycle as it moved into a collision course with the vehicle.
Velodyne, manufacturer of the LiDAR sensors installed on the vehicle, told the BBC: “Our Lidar can see perfectly well in the dark, as well as it sees in daylight, producing millions of points of information. However, it is up to the rest of the system to interpret and use the data to make decisions. We do not know how the Uber system of decision-making works.”
Representatives of the National Transportation Safety Board were seen in Tempe on March 23 running braking and performance tests on the actual SUV involved in the crash, along with Elaine’s bicycle.
The #Uber vehicle involved in Sunday’s fatal pedestrian accident was being testing Thursday night in #Tempe. #azfamily learned investigators from #TempePD, #NTSB, #NHTSA were present. Car did 5 test runs. The bike that Elaine Herzberg was also present. pic.twitter.com/QdfLjZFGQ2
— Juan Magaña (@PhxJuanTVNews) March 23, 2018
Reporting on the issue began to focus on Uber’s patchy safety record throughout its AV trials, which includes a raft of previous incidents involving its AVs and a rate of human interventions vastly higher than its competitors’, or than its own engineers predicted.
Uber’s self-driving vehicles in Arizona have required human intervention on average every 13 miles (20.9 kilometres), a striking figure compared with the 5,600 miles (9,012 kilometres) between interventions reported for Waymo’s AVs in Californian trials.
Waymo, Alphabet’s self-driving car subsidiary that settled a trade secrets dispute with Uber in February, told Forbes that their vehicles would have avoided the crash.
“What happened in Arizona obviously was a tragedy. It was terrible. You couldn’t watch that video and not be impacted by it,” Waymo CEO John Krafcik said.
“We know that for a lot of different reasons. It’s what we have designed this system to do in situations just like that.”