Over on Goodreads, a rather engaging question was posed: Can a robot have romantic feelings without violating the three "Laws of Robotics"?
For those unfamiliar, the Three Laws of Robotics were created by Isaac Asimov as proposed fundamental rules that would be instilled in all robots. First, a robot may not harm a human, nor, through inaction, allow a human to come to harm. A robot must, by this law, come to a human's aid. Second, a robot must obey all orders given to it by a human, unless doing so would violate the First Law. The Third Law requires a robot to protect itself, unless doing so would violate the other two laws. A robot would get out of the way of a speeding train, but it would still rush into the train's path to save the life of a human. A robot would also destroy itself if ordered to do so, since obedience trumps self-preservation.
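If it helps to see that strict ordering concretely, here's a toy sketch of the laws as a lexicographic priority check. Everything in it is invented for illustration; Asimov never specified an actual decision procedure, only the ranking of the laws. The names `Action`, `law_violations`, and `choose` are my own.

```python
from dataclasses import dataclass

# Toy model of the Three Laws as a strict (lexicographic) priority ordering.
# All names here are invented for illustration; Asimov only gave us the
# ranking of the laws, never a decision procedure.

@dataclass
class Action:
    harms_human: bool            # would this action hurt a human?
    lets_human_be_harmed: bool   # would taking it let a human come to harm?
    disobeys_order: bool         # does it conflict with a human's order?
    harms_self: bool             # does it damage the robot?

def law_violations(a: Action) -> tuple:
    """Smaller tuples are better; First Law compares before Second, Second before Third."""
    first = a.harms_human or a.lets_human_be_harmed
    return (first, a.disobeys_order, a.harms_self)

def choose(actions: list) -> Action:
    """Pick the action that violates the highest-priority law the least."""
    return min(actions, key=law_violations)

# The train example: standing aside preserves the robot but lets a human die,
# so the First Law outranks the Third and the robot steps into the train's path.
stand_aside = Action(False, True, False, False)
rescue = Action(False, False, False, True)
assert choose([stand_aside, rescue]) is rescue
```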
These three laws were more a vehicle for Asimov to explore the law of unintended consequences than they were, I think, actual laws we would need to imprint on robots. Throughout his short stories, attempts to adhere to these laws create a variety of odd and unforeseen behaviors, sending his protagonists off to explain how a robot can be following the rules and still acting so strangely at the same time. The stories are one part high science fiction and one part logical debate.
Which brings us around, finally, to the issue of "Can a robot be romantic while still following the three laws?"
Further down in the conversation, it was pointed out that most of us had taken what sounded like a bleak view of romance. One commenter quips:
Isn't it weird that after 7 replies that I am the first one to mention that having romantic feelings is not supposed to hurt? It's really the more opposite of that.
I don't find it weird at all; in fact, I think that seeing the eventual pain in romance and pursuing it despite that is what makes romance so very human.
Here is my response:
Thing is, it ~will~ hurt, and most people know that. Relationships end at some point, even if it's only a temporary suspension because you believe in a rejoining in the afterlife. Either someone dies or the couple breaks up, and that means there will be pain on some level. The First Law of Robotics forbids causing pain, or, through inaction, allowing pain to occur. It does not, however, allow for "greater net pleasure than pain." And that's what separates us from them. I love my wife dearly. I cherish every moment we spend together. I have an indescribable contentment when I hold her at night. All of these things add up to some kind of positive factor. Call it P.
However, eventually she'll leave me, through death, divorce, or something else, and that will cause pain. There are also the moments of pain that come in every relationship: the horrific fight, the massive misunderstanding, the time I disappoint her. These all add up to some negative factor. Call it S.
On the margin, P - S > 0. Thus, we're happy to engage in relationships, because for the most part we know we will get more out of them than they will cost us, emotionally. Of course, there are other cases where someone is able to convince themselves that S = 0, either because they see no suffering or because they believe it will never happen. Usually, at the beginning of a relationship, we tend to do that as well.
A robot, at least a 3-laws-safe robot, cannot do that. The First Law says that if S ≠ 0, then the robot cannot act. It does not allow the robot to attempt a calculation of net benefit; it binds the robot to a single permissible choice.
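To put the contrast in plain pseudo-arithmetic: a person weighs the net, while a First-Law-bound robot is blocked by any nonzero suffering at all. The function names and the numeric values below are made up purely to illustrate the two decision rules.

```python
# Contrasting the two decision rules from the argument above.
# P and S are the "pleasure" and "suffering" factors from the post;
# the numbers are arbitrary illustrations.

def human_pursues(P: float, S: float) -> bool:
    # Humans act on the margin: engage if the net benefit is positive.
    return P - S > 0

def robot_pursues(P: float, S: float) -> bool:
    # A 3-laws robot is stopped by the First Law if any suffering would
    # result, no matter how large P is. No netting is allowed.
    return S == 0

P, S = 100.0, 30.0          # a good relationship: real joy, real (lesser) pain
print(human_pursues(P, S))  # True  -> worth it, on net
print(robot_pursues(P, S))  # False -> any S > 0 forbids the act
```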
Of course, there are ways that an advanced AI might slip around this by doing a mass calculation. Whether or not you liked the movie adaptation of I, Robot, there surely were some casualties brought on by the decision to save humanity from itself.
I like the concept of the Three Laws, but for a truly advanced AI you almost need to demote them to something more like guidelines than actual rules.
Read more of the discussion on Goodreads.