The Limits of Robot Laws


Asimov’s Robot Laws are over 80 years old. Are they enough to protect us all in the age of AI?

We truly live in an exciting era. Artificial Intelligence (AI) is becoming ever more widespread, and my team and I are working on the next big thing: cognitive robots.

The fusion of artificial intelligence with a robotic body will support us humans in many areas of life where such support is still unimaginable today.

Science, politics, media, and AI developers are currently debating whether, and what kind of, legal restrictions we need to make the use of AI in general, and in robots in particular, safe. At Neura Robotics, where we are developing our own AI to control our robots, this is a topic that naturally occupies my thoughts a great deal.

Asimov’s Robot Laws

More than eighty years ago, visionary science fiction author Isaac Asimov established three fundamental laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
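Taken literally, the three laws define a strict priority ordering over a robot's candidate actions: the First Law dominates the Second, which dominates the Third. A minimal sketch of that ordering (all names and fields here are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action a robot might take (hypothetical model)."""
    name: str
    harms_human: bool       # executing this would injure a human
    prevents_harm: bool     # executing this would save a human from harm
    obeys_order: bool       # a human commanded this action
    preserves_self: bool    # this protects the robot's own existence

def law_priority(action: Action) -> tuple:
    """Rank actions under Asimov's strict priority order:
    First Law > Second Law > Third Law. Lower tuples are preferred."""
    return (
        action.harms_human,         # First Law: never injure a human
        not action.prevents_harm,   # First Law, inaction clause
        not action.obeys_order,     # Second Law: obey human orders
        not action.preserves_self,  # Third Law: self-preservation
    )

def choose(actions: list[Action]) -> Action:
    """Pick the action the laws favor most."""
    return min(actions, key=law_priority)
```

The rigidity is the point: under this scheme the robot always refuses an order that would harm a human, with no room for context, degrees of harm, or conflicting obligations.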

At first glance, these rules provide a solid foundation for the ethical use of robots. Upon closer inspection, however, it becomes apparent that they are insufficient to address real ethical and moral challenges in a world where AI systems are expected to make complex decisions and even act autonomously.

The Limits of the 1st Law

Asimov’s first law prioritizes human life. Absolutely sensible! Nobody wants robots to harm us or stand by idly when we are in danger.
However, the definition of “harm” becomes a complex and ambiguous concept in a world where AI is used in areas like autonomous driving, medical diagnosis, and, unfortunately, even warfare.
What about an autonomous vehicle that has to decide whether to harm its passenger or a larger group of pedestrians in an accident?
What about non-human creatures? A household robot equipped with AI probably shouldn’t be able to harm our pets, but should it stop its owner from swatting a fly? Or behave differently in one household compared to another?
The phrase “through inaction” seems particularly problematic to me. Asimov surely imagined a robot intervening when someone nearby is in danger. Great. But unfortunately, there are always many people in danger worldwide. If an AI-driven robot takes the First Law literally, it would have to flit around like a superhero, saving people in distress and kittens from trees. But we would never see our robot again…

The Limits of the 2nd Law

The second law, which obligates robots to obey human commands, seems particularly problematic to me: the idea that AI systems must blindly follow human instructions carries, in my opinion, significant risks.
Here we might get to the heart of the challenge of formulating universally applicable laws for robots: What if the values on which the laws were formulated are themselves part of the problem?
Given the many different and sometimes contradictory ethical constructs humanity has established in the past, can we really assume that our current Western morality is absolutely right for all time? I myself am a devout Christian, but of course, I recognize that slavery and stoning are absolutely terrible errors of our ancestors – even though they are approved in some parts of the Bible.
In far too many cultures, violence against people is still considered a legitimate expression of morality: honor killings, executions, wars. I don’t want robots involved in such things!
On the other hand, nobody wants robots not to do what we tell them. If we teach our robots to protect animals of a certain size, they might eventually refuse to carry the shopping bag with dumplings home…
And ultimately, we are all realistic enough to know that AI systems can be flawed or manipulated by malicious actors. This makes the unconditional following of human instructions quite problematic.
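One way to make this tension concrete: instead of obeying unconditionally, a robot's control layer could vet each instruction against its constraints before acting on it. A minimal sketch, assuming a simple policy of forbidden predicted effects (all names and the schema are hypothetical):

```python
class CommandRejected(Exception):
    """Raised when an instruction violates the robot's constraints."""

# Hypothetical policy: predicted effects the robot must never cause.
FORBIDDEN_EFFECTS = {"harm_human", "harm_pet"}

def execute(name: str, predicted_effects: set[str]) -> str:
    """Vet an instruction against the policy before acting on it,
    rather than following it blindly (illustrative only)."""
    blocked = predicted_effects & FORBIDDEN_EFFECTS
    if blocked:
        raise CommandRejected(f"'{name}' refused: would cause {sorted(blocked)}")
    return f"executing '{name}'"
```

Of course, this just moves the problem: someone still has to decide what belongs in the forbidden set, and whose values that set encodes.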

The Limits of the 3rd Law

The third law, which allows for the robot’s self-preservation as long as it does not conflict with the first or second law, also leads to complications. If a robot lacks sufficient information about the consequences of its actions, adhering to the third law can quickly lead to morally questionable outcomes.

Conclusion: We Need a Continuously Updated Ethical Framework

Asimov’s laws are a great starting point. But given all these challenges, it’s clear to me that we need a clearly defined framework for our robots (and strictly speaking, for those who program their AI). However, instead of rigid rules, we should develop systems capable of understanding and applying ethical principles to different situations.

1. Continuous learning processes: Robots and AI systems must be able to learn from experiences and continuously improve their understanding of ethics. Through ongoing training and adaptation to new information, they can better respond to unforeseen situations.

2. Transparent decision-making: It’s important that the decision-making processes of robots and AI systems are transparent so that people can understand how they reach certain conclusions. This allows for better review and control of these systems’ actions.

3. Inclusion of ethics experts: Ethics experts need to be involved in the development process of robots and AI systems to ensure that ethical considerations are taken into account from the start. Through collaboration between technology and ethics experts, we'll create systems that meet our society's ethical requirements.
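Point 2 above, transparent decision-making, can be sketched as an auditable decision log: each decision records not only what was chosen but also which alternatives were rejected and why, so a human can review the trail afterwards. A minimal sketch (the schema and example entries are hypothetical):

```python
import json
from datetime import datetime, timezone

def log_decision(chosen: str, rejected: dict[str, str], log: list) -> None:
    """Append an auditable record: what was chosen, which alternatives
    were rejected, and the reason each was ruled out (hypothetical schema)."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "chosen": chosen,
        "rejected": rejected,  # alternative -> reason it was ruled out
    })

audit_log: list = []
log_decision("wait_for_owner",
             {"swat_fly": "household policy forbids harming animals"},
             audit_log)
print(json.dumps(audit_log, indent=2))  # human-reviewable trace
```

The design choice here is that the log stores reasons, not just outcomes; an entry that only says what the robot did would not help anyone understand how it reached that conclusion.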

Asimov’s robot laws may work in the world of science fiction, but in the real world with its complexity and dynamics, they have their limits. To create ethically responsible robots and AI systems, we need a more flexible and context-sensitive ethical framework. Only then can we ensure that these technologies are used for the benefit of humanity.


February 16th 2024
