Forgiving a Robot

Elad Simchayoff

Imagine, God forbid, that a driver makes a horrible mistake and kills someone you love. The driver would be arrested. There would be a trial. There would be punishment. At some point, you would face a hard question: do you forgive the driver and try to move on?

Now think of the same scenario only instead of a human driver, there’s a self-driving car. Should it be punished? Would you forgive it? Is punishment or forgiveness even possible or necessary when it comes to automated machines?

Instinctively you might say no, and although a killer self-driving car is an extreme example, these questions are more relevant than ever as service robots become a bigger part of our lives. The global service robotics market, for example, was valued at $11 billion in 2018 and is expected to reach $50 billion by 2024.

It’s a conversation that humans need to have.


Is a robot that commits a felony a criminal or a defective machine? To answer this question we first need to ask ourselves whether a robot could even be held responsible for its actions.

It’s not uncommon to place responsibility on entities that are not human. We have corporate law; we can sue companies, we can punish them. And so, the question of legally turning a robot into a criminal might actually depend on the robot.

In 2017, the European Parliament’s Committee on Legal Affairs compiled a thorough report about robots, and specifically about the question of liability. The committee concluded that when a robot is autonomous beyond a certain level, it should be held accountable for its actions. The committee distinguishes between ‘a tool’, meaning a robot that performs actions programmed by another, and an autonomous being, meaning a robot that can use machine learning and AI to make decisions and implement them.

In the case of the latter, the committee concludes, the robot is more responsible for its actions than its creator, and thus could be liable for harming people or property.

It’s challenging to think of a liable robot because we usually point the blame at one of two sources: the manufacturer or the user. But as robots become more sophisticated and intelligent, this no longer has to be the case.

The study ‘Robot Criminals’, published in The University of Michigan Journal of Law Reform in 2019, tackled this issue. It concluded that there is a possibility for “no human to be sufficiently at fault in causing a robot to commit a particular morally wrongful action”.

The author distinguishes between a robot and a ‘smart robot’: machines, some already existing but mostly still to come, that meet three conditions:

  1. Are equipped with algorithms that can make nontrivial morally relevant decisions.
  2. Are capable of communicating their moral decisions to humans.
  3. Are permitted to act on their environment without immediate human supervision.

In other words, ‘smart robots’ are capable of making a decision, acting on it, and explaining it after the fact.

This might sound too ‘sci-fi’ at the moment, but with the progress of AI and machine learning, a machine having to make what we humans call ‘moral choices’ is definitely imaginable.

In the case of ‘smart robots’, the study claims, the liability lies with the robot, and it is the one to pay the price.

Why should we even punish a robot? With humans, the benefits are clear. Punishment is not only a tool to distance a criminal from the community but also aims to rehabilitate those who broke the rules.

The rationale for punishing robots could be different. The author of ‘Robot Criminals’ offers three reasons to do so:

  1. Censuring wrongful action — This is more of a symbolic or educational reason. It is a way for society to highlight which values it holds so dear that they cannot be broken without punishment.
  2. Alleviating emotional harm to victims — Studies have shown that people can fall in love with a robot; humans most definitely have strong emotions towards machines. In Japan, a mass funeral was held for Sony’s AIBO robot dogs after dozens broke down and the company stopped offering repairs, leaving owners heartbroken. In the same way, the pain of being harmed by a robot is real and needs addressing. Punishment could help.
  3. Deterrence — The study mentions the possibility of other ‘smart robots’ being affected by seeing one of their own punished, though the author concedes this is a remote possibility and a weak point. The deterrence effect could, however, change manufacturers’ behavior and motivate them to implement further preventive measures.

How do we punish a robot? There is an obvious choice: shutting it down. But that wouldn’t be enough. What happens if a ‘smart robot’ damages someone’s property? Will shutting down the robot take care of the damage?

The European Parliament’s Committee on Legal Affairs suggested a couple of interesting ideas:

  1. An insurance plan for robots. According to the suggestion, manufacturers would pay into an insurance scheme to cover any damage that a robot might cause.
  2. Paying the robots a ‘wage’. Automation is replacing the human workforce at an astonishing speed; by some predictions, 20 million manufacturing jobs will be made redundant and replaced by robots in the next nine years. Paying robots a ‘wage’ could not only help create a compensation fund in case of damages, but also address the serious implications for welfare payments, government-funded social services, and taxes caused by the shrinking paid human workforce.


I first encountered the question of forgiving a robot in a paper by Dr. Michael Nagenborg, a researcher at the University of Twente in the Netherlands.

Dr. Nagenborg focuses on the philosophical aspects of forgiving a machine. As forgiveness is a crucial part of human-human relationships, he argues it will also be a major part of human-robot interactions.

It does make sense. Robots are a big part of our lives, and their role keeps growing in size and importance. We have already seen that a robot could not only make a mistake but also be liable for it. Alongside punishment, forgiveness is humankind’s way of moving forward.

When are we more likely to forgive a robot? In a recent study called ‘Robots at Work’, researchers from the National University of Singapore, Yale University, and Texas A&M University conducted two experiments to find the answer.

The first experiment took place in the ‘Henn-na Hotel’ in Japan, the world’s first robot-staffed hotel. 194 hotel guests participated and were divided into two groups. Both groups were asked to check in and check out using the service robots at the front desk.

One group was told to “think of the robots as if they were humans”, the second group was not.

The study showed that when guests thought of the robots as more human-like, they tended to perceive them as sharing human attributes. These guests felt that “robots can think”, “robots can remember things”, and “robots can feel pain”.

The more a guest perceived a robot as human-like, the more satisfied the guest was with the service, and the less dissatisfied he or she was when the robot made an error.

The second experiment was done in a lab. This time, the researchers used a robotic arm with a screen.

Participants sat down in front of the robot with two different snacks placed before them. Each participant chose a snack, and the robot, which the participant believed was about to serve the chosen treat, was programmed to get it wrong.

The participants were divided into two groups, each facing a slightly different robotic arm:

  1. The robot spoke in a metallic-robotic voice. The screen was blank. The robot’s name was ‘Robotic Arm 57174’.
  2. The robot spoke in a female voice. The screen showed a face with moving lips as if talking. The robot’s name was ‘Alison’.

As in the first experiment, the participants facing ‘Alison’ reported significantly higher satisfaction with the service and were less dissatisfied when they received the wrong snack.

These findings correspond with a different study, which showed that people feel more at ease interacting with robots whose screens display human faces. Robots speaking in a female voice, with a female name, are the easiest to relate to — in case you were wondering why Alexa and Siri are both women.

The questions of robot liability, punishment, and forgiveness are challenging, but the debate is relevant and important. As machines get smarter, humans will have to think about all the consequences of the life we will share with them.

Robots help clean our homes, they remind us to take the pie out of the oven, they drive us and help us with those nasty parallel parking spots. But robots make mistakes.

In 2019, the Henn-na Hotel actually ‘laid off’ half of its robot staff for causing too much trouble. The robot at the check-in desk couldn’t photocopy documents, the robot luggage carriers kept getting stuck, and the robot concierge couldn’t answer simple questions.

But someday, probably sooner rather than later, robots’ mistakes could be much graver than delivering a suitcase to the wrong room. Robots can, and will, be able to cause us real harm. Someone will have to pay the price. We will have to choose whether to punish, and whether to forgive.


