16.10.2025
Establishing Liability for Accidents Caused by Autonomous Cars
As autonomous cars proliferate, their presence in traffic has inevitably led, all over the world, to an increase in accidents involving these vehicles, causing harm ranging from property damage to the loss of human lives.
Since a unitary approach to liability for AI activity (by which we mean any form of artificial intelligence) is clearly unrealistic, it is far more practical to address these issues on a case-by-case basis. Liability for harm caused by autonomous cars can thus be legislated separately from other forms of AI liability, both in our country and elsewhere.
First of all, we should mention that, technically, cars are classified according to their level of driving automation (the SAE classification): from Level 0 (no automation) to Level 5 (the vehicle can drive safely in all conditions, without any human involvement).
Tesla's Autopilot and Full Self-Driving capabilities are considered to fall within Level 2 (partial automation): the car can perform both steering and acceleration/braking functions. However, Tesla warns its users that these capabilities are "intended for use by a fully attentive driver, who has his hands on the wheel and is ready to take control at any time" and that these features "do not make the vehicle autonomous." Despite these warnings, which seemed to shield Tesla from liability for accidents involving Autopilot, a jury in the state of Florida (USA) recently found Tesla partially liable for a fatal accident involving one of its cars operating in Autopilot mode, and ordered it to pay damages worth $243 million.
But the key liability issues arise once cars reach Levels 4 and 5, where driving is handled entirely by the car and there may be no human driver in the vehicle. Waymo's self-driving taxis are an example of a Level 4 autonomous vehicle; within the specific conditions in which they are designed to operate safely, they drive completely autonomously, with no human in the car. The same level is reached by the loading and unloading trucks used in ports, where the degree of automation is extremely high, especially in Asia. Tesla's new robo-taxis, which the company launched in Austin, Texas, in late June, operate similarly, in fully autonomous mode. Absent a negligent, reckless, or malicious decision by a passenger to use Tesla's self-driving button, the company would be liable if an accident resulted from the car's poor performance.
At least three approaches to this issue have been proposed in the legal literature so far, which decision-makers seeking to legislate in this area could consider.
The first is the traditional product-liability approach based on a negligence standard, under which the plaintiff must prove that a design or manufacturing defect in the autonomous car led to the accident that caused the personal injury or property damage. In the absence of such proof, victims would not be compensated.
The second approach is to establish strict liability, under which autonomous car manufacturers would be liable for any damage caused by their cars, regardless of whether the car was defective or not. Victims would be compensated without having to prove that a design or manufacturing defect caused the accident.
The third approach would go beyond the boundaries of product liability law. It relies on a new legal construct of an "AI driver" and asks: under what conditions should a robot driver be held liable for accidents it causes? Under the "reasonable human driver" standard, the car manufacturer could be held liable for damages whenever the robot driver fails to avoid an accident that a reasonable (i.e. competent, unimpaired, and careful) human driver would have avoided. Victims would only have to prove that the robot driver behaved in a way that would have been unreasonable had it been the conduct of a human driver, and would be compensated upon such proof.
We tend to believe that this last option would be the most appropriate for ensuring a fair balance of the rights of the parties involved in a dispute arising from an accident caused by an autonomous car.
An article by Dr. Victor Dobozi (vdobozi@stoica-asociatii.ro), Senior Partner, STOICA & ASOCIAȚII.
