There are already robots that operate vehicles, such as the driverless Google car, and robots that can assist in surgery, like the da Vinci Surgical System. But what happens when robots can do these things without human oversight?
New York University professor of psychology Gary Marcus says we need to start thinking now about how to give robots a moral code.
“The more that machines have authority, the more we need to think about the decisions they make from a moral and ethical standpoint,” Marcus told Here & Now’s Robin Young.
Current popular thinking about robot ethics centers on science fiction writer Isaac Asimov’s three laws of robotics, which first appeared in his 1942 short story “Runaround”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Marcus writes in The New Yorker that the Pentagon’s investment in robotic soldiers has made Asimov’s first law unrealistic.
Marcus said that crafting realistic moral and ethical guidelines for robots will require input from many fields, to ensure the rules fit today’s social reality while remaining flexible enough for the future.
Google engineer Sebastian Thrun discusses driverless cars in a 2011 TED Talk:
The end of this segment featured a clip from the radio play “The Modern Prometheus,” an episode from the podcast “The Truth.”