Does Tesla’s Autopilot Save Lives or Risk Them?

Tesla’s autopilot won’t ever drink and drive, which makes it objectively better than humans in some respects. But is it actually safe?

The EV manufacturer claims that its autopilot technology is saving lives, and some reported incidents appear to support that claim. At the same time, however, there have been a variety of road accidents involving autopilot, some of them caused by errors in the autopilot system itself.

This obviously puts a question mark over the reliability of Tesla’s autopilot system, and of autopilot systems in general. Further complicating the matter are the bold claims made by Tesla CEO Elon Musk. Over the years, he has repeatedly touted the abilities of the company’s autopilot system, making it seem more advanced than it really is. For example, on a 2016 press call, he said that Tesla vehicles would be able to pilot themselves coast-to-coast by the end of 2017. “We’ll be able to do a demonstration drive of full autonomy all the way from LA to New York. So basically from home in LA to Times Square in New York. And then have the car go and park itself by the end of next year,” he said.

This obviously didn’t happen. So just how advanced, and how safe, is Tesla’s autopilot?

What’s the current status of Tesla’s autopilot?

Source: Vlad Tchampalov/Unsplash

Tesla introduced the autopilot feature for the first time in its Model S cars in 2014, and since then the company has continued to develop and improve the system, releasing successively better versions of its autopilot technology.

The Society of Automotive Engineers (SAE) classifies Tesla’s autopilot as Level 2. At the time of writing, no car above SAE Level 2 has been sold in the US, as no vehicle yet meets the requirements to be called a “completely autonomous vehicle.”

In this respect, Tesla’s “autopilot” is actually an advanced driver-assistance system (ADAS), meaning it is only partially autonomous: even when the car is running in self-driving mode, the human driver is required to make regular contact with the steering wheel and other controls. If the system senses the prolonged absence of the driver, it automatically stops the car.

Tesla’s autopilot system isn’t just one component or device. Rather, it is a blend of high-tech hardware, AI technology, and advanced computer software. The system also receives regular updates.

Current Tesla models come equipped with twelve ultrasonic sensors and a forward-facing radar to read lane lines and detect nearby cars. Additionally, eight surround cameras offer 360-degree vision up to 273 yards (250 meters) around the car. Tesla is dropping the forward radar in some future models and beginning the rollout of what the company calls “Tesla Vision,” which will make Tesla’s driver-assist systems almost completely reliant on the car’s cameras.

With such advancements, the company says it is confident that it will eventually be able to reach Level 5 autonomy.

According to Tesla, the onboard full self-driving computers in its modern electric cars run on a neural network developed exclusively by the company for autopilot development and training.

Tesla claims that these computers process data 40 times faster than its previously developed systems, suggesting that the autopilot is well equipped to make quick on-road decisions and avoid incidents. However, to gain access to all of these features, consumers need to add the $10,000 Full Self-Driving (FSD) capability package to a standard Tesla vehicle.

According to the World Health Organization, approximately 1.3 million people die every year in road accidents. Tesla admits that its autopilot system cannot prevent all such devastating accidents, but it believes that if the current safety level of Tesla vehicles were applied at scale, up to 900,000 lives could be saved each year.

Tesla didn’t respond to a request for comment; however, its website states that, compared to a human driver, the autopilot can perceive the world around it in far greater detail. For example, the company notes that humans driving a car can’t see in all directions at once, whereas its autopilot not only monitors every direction while driving but can also sense wavelengths and frequencies that are impossible for humans to detect.

It must be noted that Tesla’s website also states that “Tesla cars come standard with advanced hardware capable of providing Autopilot features, and full self-driving capabilities.” This makes it sound a little as if the vehicles already have full self-driving capabilities, which could confuse some consumers: it is rather easy to misread “capable of providing” as “do provide.”

Tesla’s latest vehicle safety report suggests that in the first quarter of 2021, only one accident was reported for every 4.19 million miles driven with autopilot engaged.

Elon Musk on Tesla’s autopilot system

Source: NVIDIA Corporation/flickr

Tesla CEO Elon Musk stands strongly behind the autopilot system developed by his company. In interviews and at events, he has repeatedly pointed out that millions of people die every year in road accidents and argued that autopilot can significantly reduce these numbers.

He even claimed once that, compared to manually driven cars, Tesla’s autopilot-enabled vehicles reduce the risk of a crash tenfold. However, many experts considered this claim misleading and argued that Musk often overstates the safety potential of Tesla’s self-driving cars.

When asked whether Tesla can be held responsible if one of its autopilot-equipped cars crashes, Musk replied, “Does Otis take responsibility for all elevators around the world? No, they don’t.” He added that the company is responsible only if its design or autopilot software is found to be faulty.

In March 2018, 38-year-old software engineer Wei Huang died in a crash while driving a Tesla Model X. Reports revealed that the car was in autopilot mode at the time of the accident. Huang’s family sued Tesla, and the incident attracted a great deal of media attention. The family alleged that Musk’s company does not provide enough safety features in its autopilot cars and that it is effectively testing its autopilot using its customers as live test drivers.

In Tesla’s defense, Musk responded by saying that the Model X (the model Huang was driving) does not run on a fully self-driving system. Instead, the autopilot in that model is a “hands-on” system, meaning the driver is not supposed to remove their hands from the steering wheel while autopilot is on; if they do, they receive a warning from the system.

When the National Transportation Safety Board (NTSB) issued its final decision on the case, the board held Tesla’s autopilot system and the driver (who the investigation showed was possibly playing a video game on his cellphone before the accident) jointly responsible for the crash. The NTSB also criticized government regulators for not implementing its safety recommendations for self-driving vehicles.

Elon Musk was also furious about the intense media coverage of the incident. In an interview, Musk said, “They should be writing a story about how autonomous cars are really safe but that’s not a story that people want to click on. They write inflammatory headlines that are fundamentally misleading to readers. It’s really outrageous.”

In most of his recent interviews, Musk has generally disputed or dismissed the likelihood of a fatal crash being caused by an autopilot error. He firmly believes that self-driving cars are meant to provide (and do, in fact, provide) a safer traveling experience, though he has previously acknowledged that, like everything else in the real world, the autopilot will never be perfect.

Tesla recently introduced its latest self-driving software, FSD Beta 9.2, but Musk does not seem impressed with it. In one of his tweets, he said that FSD Beta 9.2 is not great and that his AI team is working to improve it further. Responding to a user on Twitter, he also mentioned that he is aiming for a self-driving system that is 1,000% safer than a typical human driver.

Who is to really blame when a self-driving car crashes?

Source: Pixabay/pexels

Automobile engineers and industry experts believe that autonomous vehicle systems such as Tesla’s autopilot have great potential to make driving safer and more convenient for humans, but they can also make errors. The haunting question is: when they do make mistakes, who takes the blame, legally?

A research paper published in 2019 by Cornell University uses computer science and game theory to attempt an answer to this question. The study suggests that human drivers are likely to act more carelessly in a self-driving car because they tend to believe that driverless technology will always keep them safe on the road. The researchers recommended that policymakers design a set of optimal liability rules covering drivers, car manufacturers, transport authorities, and the self-driving system itself.

In 2021, another study, published in the academic journal Risk Analysis, found that in the case of a crash involving a self-driving vehicle, the general public is more likely to blame the car company and the autonomous driving system than the driver. The authors therefore suggest that, when developing policies for autonomous vehicles (AVs), regulators should also consider drafting rules on compensation for accident victims and their families.

Together, the two studies illustrate the complexity of establishing legal liability in the event of an AV accident. With the current generation of mostly semi-autonomous self-driving cars, the blame cannot be placed entirely on either car manufacturers or car owners. It is therefore going to be a tricky task for regulatory bodies to come up with a fair and widely accepted policy on legal liability for autonomous vehicles.

So is Tesla’s autopilot safe or not?

Source: Bram Van Oost/Unsplash

Tesla’s autopilot is an intelligent, AI-based vehicle technology that is continuously evolving through both software and hardware upgrades. The autopilot is also assisted by an advanced autonomous braking system, a safety feature offered by several vehicle manufacturers, although Tesla includes it in all of its cars.

From radar to 360-degree cameras and advanced sensor coverage, numerous features make these vehicles some of the best self-driving cars in the world. Considering the autopilot safety features, design, build quality, and computer intelligence that Tesla offers in its cars, Tesla vehicles may well minimize the on-road risks associated with driving, although no vehicle can be completely safe, autonomous or not.

However, an important point often highlighted by automobile experts is that the autopilot in Tesla’s self-driving cars is not an entirely independent system. All of its models require active human supervision, and a driver who ignores this is definitely at risk.

https://www.youtube.com/watch?v=SSwKE7BtZvE

Previously reported autopilot crashes also show that this is still a technology under development, so there is always the possibility that a computer-based self-driving system will make a mistake in analyzing or responding to traffic or road conditions.

Recently, the National Highway Traffic Safety Administration (NHTSA) also opened a formal investigation into Tesla’s driver-assistance autopilot system. The investigation will cover all autopilot-equipped Tesla cars sold from 2014 to 2021.

If we have to draw a conclusion now, then Tesla’s autopilot is undoubtedly a state-of-the-art technology built to enhance human safety, and it seems that, on the whole, it actually does. However, it must be handled with care and caution and used appropriately, as is true of all technologies.
