In 1942, science-fiction writer Isaac Asimov introduced "The Three Laws of Robotics" in his short story "Runaround."
Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law Two: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Law Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws come from the world of science fiction, but the real world is catching up. This month, a law firm gave Pittsburgh's Carnegie Mellon University $10 million to explore the ethics of artificial intelligence, or AI. This comes after industry leaders recently joined to form the Partnership on Artificial Intelligence to Benefit People and Society.
Peter Kalis is chairman of the law firm K&L Gates. He says technology is dashing ahead of the law, leading to questions that were never taken seriously before, such as what happens when you make robots that are smart, independent thinkers… and then try to limit their autonomy?
Kalis suggests that one day we'll want laws to keep our autonomous robots from running amok, but that we'll also have to consider the ramifications of extending these potentially sentient AI entities legal protections, as defined by the U.S. Constitution.
"The Constitution says that every person should benefit from equal protection under the law. Well, I don't think anyone contemplated that person would include an artificially intelligent robot," Kalis says. "Yet I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it's a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law."
With the law firm's gift, Carnegie Mellon President Subra Suresh says the university will be able to dig into issues now emerging within automated industries… "Take driverless cars," he says. "If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?"
The issues go beyond self-driving cars and renegade robots. In the next generation of smartphones, in the chips embedded in home appliances, and in the ever-expanding collection of personal data being stored in the "cloud," questions about what's right and wrong are open to study.
So are Asimov's Three Laws of Robotics all there is to govern AI right now? And is it necessary to have a moral guideline that everyone can understand? "I think putting all three laws into one: Do no harm, could be the very first one," Suresh says. He says humanity today is approaching "an interesting point at the intersection of humans and technology," one we don't have any prior experience with.
Original Article @ NPR