Whenever a discussion turns to autonomous cars in Germany, the comment that “the question of liability must first be clarified” inevitably comes up. And it is delivered with such certainty, as if the real challenge of autonomous driving were not the technology but a legal framework as inscrutable as rocket science.
What is often overlooked is that this question was settled long ago, and by that I mean decades. As early as 2017, the final report of the German Ethics Commission on Automated and Connected Driving pointed, in Guideline 11, to the applicable principles of product liability:
> Liability for damage caused by activated automated driving systems is subject to the same principles as other product liability.
In other words:
- If damage is caused by an algorithm error, for example because the autonomous driving system misjudged a person, object, or driving situation that it should have handled correctly under the given conditions, then the manufacturer is liable, just as it is liable for a faulty brake or a malfunctioning airbag.
- If damage is caused, for example, by a poorly maintained, dirty, or poorly calibrated sensor, then the operator of the autonomous car is liable.
- If damage is caused, for example, by unauthorized intervention by a passenger, then the passenger is liable.
In fact, around 95% of all accidents involving autonomous cars are caused by the human drivers around them. This is well documented: the California Department of Motor Vehicles (DMV) requires all operators of autonomous test vehicles and robotaxis to report every incident to the authority on a dedicated form, and the DMV makes these reports publicly available.
In the past—and by that I mean hundreds of years ago—there were already laws that addressed the question of who is liable for damage caused by other “autonomous systems,” in this case “slaves.” Here is an excerpt from my book The Last Driver’s License Holder… published in 2019:
In fact, we can draw on approaches that have already been described, even if they seem repulsive and completely out of place at first glance. Jerry Kaplan, author of the book Humans Need Not Apply, digs up the (thankfully abolished) slave laws that dealt with similar issues before the American Civil War. Slaves were property and had owners. The provisions governing who was liable for damage caused by a slave or who should be punished were laid down in the “Slave Codes” (alongside many other regulations, most of which were directed against slaves). The owners were only held liable in certain cases; in many others, the slaves were punished. However, the determination of guilt was less concerned with law and justice than with the well-being of the slave owner: would punishing the slave cause too much harm to the owner?
And just to add to that: even in the 17th and 18th centuries, slave codes were by no means as uncontroversial as one might assume today. But how do you punish robots and companies for misconduct or damage they cause when you are not dealing with an “individual”? Do you punish only those responsible, those who carried out the act, those who gave the orders, or the entire company? Do you take into account the motive, the intention, and the impact on society?
Of course, you can’t put a robot in prison. But there are approaches that allow us to achieve an equivalent. Both robots and companies serve a purpose. Their entire existence is geared toward fulfilling this purpose. If they are fined or have their business licenses revoked, they cannot fulfill this purpose for a certain period of time. A judge can also order the closure of a company. All these measures deprive the company of the basis for continuing its business operations. This can be tantamount to a death sentence. One example is the Deepwater Horizon accident in the Gulf of Mexico in 2010, after which the authorities forced BP to pay for the costly clean-up and imposed fines running into billions.
A self-driving vehicle is designed to transport us and our goods. If a penalty is imposed, it can no longer fulfill the task for which it was created. Jerry Kaplan argues that operators of fleets of self-driving cars could be required to register each car as a separate company rather than forming one company for the whole fleet. A lawsuit over damage would then not take an entire fleet, for example that of a taxi company, out of service, but only the one vehicle and its associated company.
Incidentally, the same question arises with regard to a topic that is only now becoming relevant: humanoid robots. It needs to be addressed in the same way as with autonomous cars. In my book Homo Syntheticus: How Humans and Machines Are Merging, to be published in spring 2026, I of course also discuss this question.
In other words: The question of liability has long been clarified and is adequately covered by existing legal frameworks.
This article was also published in German.
