Fatalities Caused by Autonomous Vehicles


QAC EmTech
QAC Emerging Technologies Quality Assurance

This monthly newsletter will focus on QAC’s activities regarding R&D, Connected Vehicles, Cognitive Autonomous Systems, Artificial Intelligence, Internet of Things, and Blockchain Quality Assurance Services.

FOCUSED ON THE FUTURE

Leading the way

 
Welcome to the seventeenth edition of the EmTech newsletter. In this edition, we review the levels of vehicle autonomy, examine self-driving vehicle crashes that have caused deaths or injuries, and outline QA Consultants' quality assurance solutions for autonomous vehicles.


Keeping you informed

 

Our emerging technologies quality assurance workstreams

R&D and Grant Projects

Research and development of new technologies that position QAC to become a world leader in quality assurance services.

Connected Vehicles

Testing and Quality Assurance services exclusively developed to provide integration testing services for highly connected vehicles.

Cognitive Autonomous Systems

Fully automated testing and quality assurance services exclusively developed for Cognitive Autonomous Systems.

Cybersecurity, IoT, AI, and Blockchain

Focus on developing new technologies that utilize AI to address QA challenges in the cybersecurity, IoT, and blockchain domains.

Autonomous Vehicle (AV)

The Society of Automotive Engineers (SAE) defines six levels of automation (Levels 0–5) for self-driving vehicles (known as autonomous vehicles). These levels, which have been adopted by the National Highway Traffic Safety Administration (NHTSA), are as follows:

  • Level 0: No automation, a human driver performs 100% of the driving.
  • Level 1: An Advanced Driver Assistance System (ADAS) assists the human driver with either steering or with braking and accelerating.
  • Level 2: ADAS partially controls steering, braking, and accelerating while a human driver monitors the driving environment.
  • Level 3: Under some circumstances, all driving tasks are performed by the Automated Driving System (ADS). However, a human driver must be ready to take control when it is required.
  • Level 4: Under some circumstances, all driving tasks and monitoring the driving environment are performed by ADS and a human driver does not need to pay attention.
  • Level 5: Driving in all circumstances is performed 100% by ADS and human drivers are not involved in driving.
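The levels above can be captured as a small enumeration. The following sketch is purely illustrative (the class and helper names are our own, not part of any SAE or NHTSA artifact); it encodes who must monitor the driving environment at each level:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels as adopted by NHTSA."""
    NO_AUTOMATION = 0           # human performs 100% of the driving
    DRIVER_ASSISTANCE = 1       # ADAS assists with steering or braking/accelerating
    PARTIAL_AUTOMATION = 2      # ADAS controls steering and braking; human monitors
    CONDITIONAL_AUTOMATION = 3  # ADS drives in some conditions; human must take over on request
    HIGH_AUTOMATION = 4         # ADS drives and monitors in some conditions
    FULL_AUTOMATION = 5         # ADS drives in all conditions; no human involvement

def human_must_monitor(level: SAELevel) -> bool:
    """At Levels 0-2 the human driver must monitor the driving environment."""
    return level <= SAELevel.PARTIAL_AUTOMATION

def human_must_be_ready(level: SAELevel) -> bool:
    """Through Level 3 the human must be ready to take control when required."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

Note the boundary at Level 3: the human no longer monitors continuously but must still be ready to intervene, which is exactly the hand-off that figures in several of the crashes reviewed below.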

Self-Driving Car Crashes Resulted in Deaths or Injuries

In recent years, autonomous vehicle accidents have killed and injured several people. In most of these situations, a human driver was at the wheel. As a result, autonomous driving programs are currently under critical scrutiny. Below, we review some of these self-driving car accidents.

In 2014, the Tesla Model S received a harsh reliability review from Consumer Reports. According to the reports, the EV stopped working after 12,000 miles of the test run, and a technician had to perform a hard reset to make the vehicle functional again. Many issues, such as the navigation screen going black, could not be resolved. Although the test results were reportedly based on only one vehicle, the anticipated reliability rating was lowered [1].

In 2016, a tragic accident occurred in Williston, Florida. While driving, the car’s autopilot feature failed to brake because it did not register a white truck against a brightly lit sky. As a result, the car collided with the truck which resulted in the driver’s death [2].

In March 2018, a 2017 Tesla Model X SUV crashed on Highway 101 in Mountain View, California, resulting in a fatality. The driver had been playing a video game on his phone just before the collision occurred [3]. The Autopilot navigation system malfunctioned, causing the vehicle to crash: allegedly, Tesla Autopilot steered the vehicle left into a head-on collision with the concrete median. Tesla was eventually sued over the malfunctioning Autopilot.

Recently, there was a fatal crash in Houston, Texas, which involved a 2019 Tesla Model S. The vehicle was traveling at high speeds when it failed to navigate a curve and went off the road. The vehicle crashed into a tree and burst into flames [4]. Though no one was in the driver’s seat, two male bodies were found in the vehicle. The owner of the car was sitting in the backseat and another man was found in the front passenger seat. The investigation is still ongoing.

In 2018, a woman was struck and killed by a self-driving Uber vehicle in Tempe, Arizona. She was detected by the autonomous vehicle's sensors; however, due to improper software settings, the car failed to steer away from her. The software classified her as a false positive rather than as a pedestrian, so the vehicle did not react fast enough and collided with her, causing a fatality. According to the National Transportation Safety Board (NTSB), the vehicle's automated systems failed to identify a bicycle as an imminent collision danger. The board published a list of factors contributing to the fatal crash [5]. One of the noted issues was an inadequate safety risk assessment procedure in Uber's self-driving program; in other words, the lack of a safety division within Uber's Advanced Technologies Group (ATG) was cited as a main cause of the failure.

In 2018, another self-driving car was involved in an accident that sent two people to the hospital in Pittsburgh, Pennsylvania. A modified Ford Fusion carrying four occupants was T-boned by a box truck that ran a red light. The vehicle belonged to Argo AI, the self-driving startup backed by the Ford Motor Company. Although the driver of the box truck was cited for running the red light, Argo immediately grounded its fleet before allowing its vehicles back on the road [6].

In December 2017, in California, a motorcyclist was involved in a crash with a self-driving Chevy Bolt equipped with an autonomous driving system called Cruise. While in self-driving mode, the vehicle made an abrupt lane change and injured the motorcyclist. The Honda motorcyclist had been traveling at a higher speed than the car, had moved into the vehicle's path, and fell over. At the time of the accident, the occupant of the self-driving vehicle did not have their hands on the steering wheel [7].

Evaluating an autonomous vehicle's ability to respond to various system faults is a difficult task. Whether errors occur in the blink of an eye or once every ten thousand miles, the challenge lies in handling such faults so that they do not escalate into a failure at the vehicle level and, eventually, into the corresponding potential hazards. Even when a defect is established, it is not always easy to decide who is at fault. In many of these cases, a malfunctioning autopilot contributed to the accident; therefore, adequate safety risk assessment procedures should be in place to prevent such malfunctions.
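One common way to exercise this fault-handling behavior in testing is fault injection: deliberately introducing sensor or subsystem errors and checking that the vehicle-level logic degrades safely. The following is a minimal, toy sketch of the idea; every name in it is hypothetical and it does not represent any vendor's actual stack or tooling:

```python
import random

def perception(sensor_reading, fault=None):
    """Toy perception stage: returns detected obstacle distance in metres.
    `fault` simulates a failure mode injected by the test harness."""
    if fault == "dropout":
        return None          # sensor produced no reading at all
    if fault == "false_positive":
        return 999.0         # obstacle misclassified as distant/ignorable
    return sensor_reading

def plan_action(distance, brake_threshold=30.0):
    """Toy planner: brake when an obstacle is close or the sensor is silent.
    Braking on missing data is the safe degradation we want to verify."""
    if distance is None or distance < brake_threshold:
        return "brake"
    return "cruise"

def fault_injection_test(n_trials=1000, seed=0):
    """Inject random faults and verify a safety property holds every time."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        true_distance = rng.uniform(5.0, 100.0)
        fault = rng.choice([None, "dropout", "false_positive"])
        action = plan_action(perception(true_distance, fault))
        # Safety property: a sensor dropout must always result in braking.
        if fault == "dropout":
            assert action == "brake"
    return True
```

Note that the injected "false_positive" fault mirrors the Tempe failure mode described above: once the obstacle is mislabelled, the planner happily continues to cruise, which is why risk assessment must cover classification errors as well as outright sensor outages.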

QAC offers several solutions for the quality assurance of autonomous vehicles. We have developed a model-based System Integration Testing (SIT) solution that validates software systems against functional requirements and specifications by leveraging the ISO 26262 standard. QA Consultants' xCog solution enables automated testing of cognitive systems, in particular Cognitive Advanced Driver Assistance Systems (ADAS) inside the vehicle, such as autopilot functionality. Currently, QA Consultants is developing a solution to identify the vulnerability of autonomous vehicles to various cybersecurity attack vectors. This includes mechanisms to detect cyber-attacks and the use of penetration testing and fault injection to identify vulnerabilities in the autonomous vehicle.

 

 

Quality Assurance for AI, Robotics, and Autonomy

xCog

Accelerating automated testing for cognitive systems

STAY TUNED

Coming next month

To learn more please visit our EmTech page at https://qacstaging.wpengine.com/solutions-and-services/emerging-technologies. More topics to come soon! Stay tuned to our next newsletter.

Our partners:

 

Recent thought leadership
