We need to talk about your killer robot problem

11 July 2016

Over the last few days, the news has been full of articles about the Dallas police department using a robot to deliver and detonate a bomb in order to end a standoff with an alleged offender.

We're not seeking to trivialise the complex and traumatic events in and around Dallas, but we believe there are some interesting points about how we interact with robots, and more importantly...how they interact with us.

Robots as an extension of ourselves

We live in a world where we interact with robots every day. For example, Lower Hutt Library has a robot that uses the RFID tags in the books to sort them into different piles for re-shelving.

For the most part, however, these robots have been created to perform actions that are a direct extension of their creators’ will. They have either executed a very simple set of instructions (like a robot on a car assembly line) or followed detailed, real-time instructions from a human operator (like a flying drone, or a bomb disposal robot). What’s common in all those situations is that it’s pretty clear the entity behind the actions performed by the robot is human. In other words, the mechanical agent itself has no agency…or does it?

Increase in AI

We are also seeing a massive increase in AI research and use.

This is great, and looks likely to provide some real benefits to humanity. There is, however, a point at which these creations begin to take actions that carry some deep moral consequences.

Robots with autonomy

So at some point in the near future, we’re going to have robots with AI. At that point we have to accept that these creations have a form of autonomous agency.

That’s OK as long as they are still following the exact instructions of their human controllers, but what happens when we ask a robot to interpret higher and higher level instructions?

“Clean the car” rather than “Pick up sponge. Wet sponge. Wipe car with sponge.” Or “Fly over area XX.XXX.XXX, XX.XXX.XXX and take video of any rabbits you see” rather than “Turn left. Hold altitude. Record video now.”

As you can imagine, there are real advantages to being able to task agents with high-level directives and allow them to ‘fill in the blanks.’ It frees us up to get on with our lives.
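To make the difference concrete, here’s a minimal sketch in Python of the two styles of tasking for a hypothetical car-washing robot. The class, the method names and the print statements are all invented for illustration; the point is simply where the decision-making sits.

```python
class CarWashRobot:
    """Hypothetical robot that accepts both low- and high-level instructions."""

    # Low-level primitives: each step is spelled out by a human operator.
    def pick_up_sponge(self):
        print("picking up sponge")

    def wet_sponge(self):
        print("wetting sponge")

    def wipe_car(self):
        print("wiping car with sponge")

    # High-level directive: the robot chooses the steps itself.
    def clean_the_car(self):
        # The robot 'fills in the blanks' here; the human never specifies
        # (or approves) the individual actions it decides to take.
        self.pick_up_sponge()
        self.wet_sponge()
        self.wipe_car()


robot = CarWashRobot()

# Direct control: the entity behind every action is clearly human.
robot.pick_up_sponge()
robot.wet_sponge()
robot.wipe_car()

# Autonomous tasking: one instruction, and the robot decides the rest.
robot.clean_the_car()
```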

The issue is when the agent ‘fills in the blanks’ and someone ends up getting hurt. The Tesla crash is a real example: a tragic road death, but in this case with a computer behind the wheel. Fortunately, someone’s thought about this before.

The three laws of robotics

In his 1950 short story collection ‘I, Robot’, Isaac Asimov explored some of these very issues. When we give instructions to a robot, what rules, or moral code if you will, should it apply? In his world, any robots that were created were made to comply with these three laws as absolutes:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

The order of these laws was VERY important, as the cartoon below from xkcd shows.

[xkcd cartoon: the importance of robotic rule ordering]
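To see why the ordering matters, here’s a minimal sketch in Python of the three laws applied as checks in strict priority order. The Action class and its boolean fields are invented stand-ins for what is, in practice, an extremely hard prediction problem: knowing whether an action would harm a human at all.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """Hypothetical description of what a proposed action would do."""
    harms_human: bool       # would this injure a human, or let one come to harm?
    endangers_robot: bool   # would this put the robot itself at risk?


def may_perform(action: Action, ordered_by_human: bool) -> bool:
    """Apply the three laws in strict priority order."""
    # First law: never harm a human, regardless of any order given.
    if action.harms_human:
        return False
    # Second law: obey human orders (already filtered by the first law),
    # even if obeying puts the robot itself at risk.
    if ordered_by_human:
        return True
    # Third law: protect your own existence, subject to laws one and two.
    if action.endangers_robot:
        return False
    return True


# "Take this package and detonate it within X feet of this person"
# fails the first-law check, no matter who gave the order.
print(may_perform(Action(harms_human=True, endangers_robot=True),
                  ordered_by_human=True))  # False
```

Swap the order of those checks, putting obedience or self-preservation first, and you get the rather different worlds the cartoon imagines.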

You can read more of Asimov’s stories to see how this all pans out, or even watch a movie adaptation =). One of the ideas that comes out is that rules don’t always work as expected. Another is how we should judge computer behaviour: humans aren’t perfect either, and even imperfect robot actions can improve our safety and our enjoyment of life.

Back to Dallas…

It’s still too early to know exactly what happened in Dallas. Was the robot operating under direct human control? Or was it interpreting higher-level directives, for example, “take this package and detonate it within X feet of this person”?

If it was the latter, then in an Asimovian world the robot would refuse, citing an impending breach of the first law of robotics.

It would be very easy to miss the right time to have an informed discussion about how we want to proceed here. Whether or not the specifics of this incident are ever repeated, the trend is obvious: robots and AI are going to be more and more involved in our lives. Working out how we want them to serve us, safely and usefully, is, in the spirit of one of the Issues Team Mantras, a conversation we should have sooner rather than later…“Decide, Don’t Slide.”