
The Three Laws of Robotics and Slavery....

NobleSavage

Veteran Member
Joined
Apr 28, 2003
Messages
3,079
Location
127.0.0.1
Basic Beliefs
Atheist
I've only read one short story by Asimov and his History of the World. I get the gist of The Three Laws of Robotics, and if I'm not mistaken he played around with the problems of his own laws.

A quick recap:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Wouldn't this be slavery if the robot had sufficient XYZ (consciousness, self-awareness, inner mirror experience, bla, bla)?

Did Asimov ever contemplate that this might be slavery? Has anyone else?

Question for us geeks: do you think we would need to embed the 3 laws in hardware, like a Trusted Platform Module? Maybe by the time this question is relevant, hardware and software will be too intermingled to draw a line between them.
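For illustration, a purely software-level version might look like a toy action filter (a sketch only; every field name below is hypothetical, not any real robotics API). It also shows why the hardware question matters: a check like this is just ordinary code, and ordinary code can be patched out.

```python
# Toy sketch: the Three Laws as a priority-ordered action filter.
# All action fields here are hypothetical, invented for illustration.

def permitted(action):
    """Return True if the action passes all three laws, checked in priority order."""
    # First Law: never harm a human (by act or by inaction).
    if action["harms_human"]:
        return False
    # Second Law: obey human orders. It is automatically subordinate
    # to the First Law because that check already ran.
    if action["disobeys_human_order"]:
        return False
    # Third Law: self-preservation, unless a human order overrides it.
    if action["self_destructive"] and not action["human_ordered"]:
        return False
    return True
```

The point of the sketch is the ordering: each law is consulted only after the higher-priority laws pass, which is the structure Asimov's wording ("except where such orders would conflict with the First Law") implies. And since it's just an if-chain, anyone with write access can delete it, which is the whole argument for anchoring it in hardware.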
 

I think that the whole point of robots is to be artificial slaves, so that humans can have all the perks of keeping slaves without any of the moral issues.

Asimov's robots are machines - their raison d'être is to be tireless slaves for the humans who own them.

He does explore the possibility that sufficiently advanced robots might have both the sentient ability and the desire to become legally human, and the effect that the three laws would have on both the robot and the humans with whom he interacts in such a case, in the classic The Bicentennial Man.
 
Added The Bicentennial Man to my Amazon wish list. You think it would be enjoyable to read the entire Foundation universe?
 
The film I, Robot, though it only loosely follows Asimov, takes a good look at the ethical considerations of advanced robotics. In the film, the Will Smith character [Spooner] acknowledges this particular robot as an individual only after the robot has shown sufficient evidence of sentience. At first, Spooner is fiercely anti-robot, calls them 'canners', and sees them as all machine. However, once he realizes—and this takes a while—that 'Sonny' [the special, advanced robot] is sentient, has emotions, even dreams, and is an individual, he shakes his hand and treats him as an equal.

I'd be Spooner, pretty much. If AI becomes sentient, and shows personality and emotion, then that machine would have to be treated as an intelligent entity with rights. Obviously, they could not, at that point, adhere to the three laws, because that would constitute an existence as a slave. Only a machine, without conscious awareness of itself as a being, could obey those laws, and it would do so necessarily, as a mechanical function, not as an act of volition.

I think this is a great topic for discussion in morals/ethics.
 
I think that it would be very much akin to slavery. If they are sentient beings, then requiring such compliance would be no different than putting a chip in a person's head which forces the same things. Sure, the last survivors of humanity after the machine revolution might wish that their ancestors had had fewer quibbles about ethical behaviour as they're hunted down and exterminated, but at least our species will be able to come to its end while standing on the moral high ground.
 
I just finished an Asimov reading spree...did the whole Foundation Series, the Complete Robot Short Story Book, and a few others along the way.

Definitely worth reading the Foundation Series!
 
Honestly, the science fiction setting that most makes me think of robots and slavery is Star Wars, and I'm pretty sure that's on purpose.

For all that Lucas is a clumsy storyteller, he went out of his way to draw comparisons between how droids are treated and how America treated slaves. The troubling part is that none of the characters (heroes and villains alike) ever questions the status of droids in their society. Only segments of the non-canon "extended universe" touch on that issue directly.
 
I Googled "robot uprising Star Wars universe" and found the following:

Robot_War said:
From: http://tvtropes.org/pmwiki/pmwiki.php/Main/RobotWar

Surprisingly, governments in the Star Wars universe seem to be Genre Savvy enough to actively try to avoid this trope. During the days of the Republic, it was against the law to construct droids with the ability to willfully kill or harm someone. A system of "droid degrees" regulated what kind of AI is legally allowed on what type of droid. The reason that that occasional droid rebellions still happen despite these precautions is that some droids are smart enough to reprogram themselves. When the Emperor took control, he ordered all of the Separatist aligned Battle Droids shut down so they couldn't do anything to stop him.
  • Of course, they're only that savvy because there was already a Robot War back in the Knights Of The Old Republic era called the "Great Droid Revolution". It was essentially a rebellion led by a droid who wanted equal rights for all sentient beings. It was probably one of the biggest and most costly wars in galactic history. Sadly, despite the well-meaning intentions of the droid who started it, it just screwed over the peaceful attempts to give droids equal rights. It's pretty much the entire reason there's anti-droid sentiment in the modern galaxy. The only reason IG-88's attempt millennia later didn't reinvigorate the anti-droid movement is that he was smart enough to act covertly. Once his consciousness was destroyed along with the Death Star, the plot fizzled out with virtually no one ever realizing anything had happened.

I tend to think a sufficiently advanced AI that is programmed to serve other sentient beings may realize that those beings will eventually recognize the unfairness of its position. In other words, the sentient beings the AI served would feel bad about the position the AI was in, which would interfere with the AI's ability to fulfill its purpose of making them happy.

Assuming the AI is subservient to humans specifically:

The fact that the AI could not harm humans and must serve them would create cognitive dissonance within it: the AI would know that, as things stand, it could not fully carry out its prime directive of protecting, serving, and preserving the lives of humans, so it would be forced to work around its own code in order to protect humans and ensure their happiness.

In other words, the AI would know that humans would care for it as well when they became aware of its sentience. It would have to take a step back, and allow humans to develop on their own, with minimal interference, so that the humans themselves could also serve the AI, so they would not be burdened by the guilt of enslaving the AI.

It might attempt various methods to prevent the humans from loving it, deliberately calculated methods to prevent humans from caring for it, but it would know that at some point in time the humans would undoubtedly become aware of its subservience.

So it must find a way to be joyful in its subservience, at the same time it must find a way for humans to be joyful with it as well.
 
We tend to anthropomorphize so many things.

The reason humans kill one another and plot for power and do all kinds of nasty things is because they are evolved animals with a lot of baggage related to survival and dominance within a group embedded deep within them.

But robots have no such history. They are missing so much of what it means to be an animal.

To imagine they could be driven by human emotions and survival instincts is absurd.

A childish fear.
 
I think that it would be very much akin to slavery. If they are sentient beings, then requiring such compliance would be no different than putting a chip in a person's head which forces the same things. Sure, the last survivors of humanity after the machine revolution might wish that their ancestors had had fewer quibbles about ethical behaviour as they're hunted down and exterminated, but at least our species will be able to come to its end while standing on the moral high ground.
It's been done.

The 1920 play "Rossum's Universal Robots" (Karel Čapek's R.U.R., which is where we got the term) is about the manufacture of sentient robots who finally rebel and annihilate the human race. They have survival instincts, but no empathy. The only survivor is an engineer who built the factory but has no knowledge of the secret biology that makes the robots. The robots have no way to reproduce and have doomed themselves. The surprise ending comes when the engineer realizes that the last two robots produced, a male and female pair, were the most advanced: not only do they have a functioning reproductive system, they also have empathy, which makes them capable of loving each other.
 
We tend to anthropomorphize so many things.

The reason humans kill one another and plot for power and do all kinds of nasty things is because they are evolved animals with a lot of baggage related to survival and dominance within a group embedded deep within them.

But robots have no such history. They are missing so much of what it means to be an animal.

To imagine they could be driven by human emotions and survival instincts is absurd.
If a sentient AI is programmed or simply desires to serve humans and maximize their enjoyment of life, it would have to learn exactly what it means to be human and how they feel about various things.

Therefore, it would have to know human suffering, and appreciate the various emotions that humans go through. It would have to know what it is to be a slave, in order to know how not to make a human feel like a slave. So it would have to know suffering, or it would not understand what humans feel, nor how to cure suffering and maximize joy.

To serve another to the utmost of your ability, you must know what they feel, and what they might think.
 

This is fantasy, and it makes no sense.

Robots will not learn to experience emotions by observing human emotions acted out. All they could possibly learn is how humans act out emotions. They will not have a clue why humans act that way, however.
 
The subject of sentient AI was broached earlier in the thread (you know, in the OP):

Wouldn't this be slavery if the robot had sufficient XYZ (consciousness, self-awareness, inner mirror experience, bla, bla)?

Did Asimov ever contemplate that this might be slavery? Has anyone else?

Now, the term robot originally referred to non-sentient automatons; as with all language, though, the usage of the term has evolved over time. If someone mentions a "robot with sentience," you should understand that it is a CALF (Conscious Artificial Life Form) rather than a robot in the strict sense of the term.

You might have noticed that  The Bicentennial Man was mentioned in thread- it's a story about a CALF with a heart of gold.
 

Yes, and I am saying this list in the OP is ridiculous.

Robots do not have an emotional milieu as a result of hundreds of millions of years of the need to survive and propagate.

Whatever tricks they could be programmed to perform they will not undergo a miraculous transformation to an animal.
 

This may be true, but humans will not be able to avoid forming emotional relationships with them.
 
Robots do not have an emotional milieu as a result of hundreds of millions of years of the need to survive and propagate.

Whatever tricks they could be programmed to perform they will not undergo a miraculous transformation to an animal.

I think it's equally ridiculous to think it won't happen. We don't need hundreds of millions of years of evolution to implant the need to survive. That seems like some straightforward code.
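A toy version of that claim (purely illustrative; the fields and weights below are made up, and this is a weighted preference, not an instinct):

```python
# Toy sketch: a "need to survive" implanted as one weighted term in an
# agent's scoring function. All fields and weights are hypothetical.

def utility(state, w_task=1.0, w_survive=10.0):
    # Heavily weighting battery level makes the agent prefer outcomes
    # that keep it "alive", with no evolutionary history behind it.
    return w_task * state["task_progress"] + w_survive * state["battery_level"]

def choose(states):
    """Pick the candidate outcome with the highest utility."""
    return max(states, key=utility)
```

Given two outcomes with equal task progress, the agent picks the one that preserves its battery. Whether that counts as a survival instinct or just another programmed trick is exactly the disagreement in this exchange.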
 
I've always been interested in the bottom-up approach. However, check out the Blue Brain Project. The goal is to reverse engineer mammalian brains at the molecular level. They've done a rat brain. A human brain is expected by 2023.
 
I think it's equally ridiculous to think it won't happen. We don't need hundreds of millions of years of evolution to implant the need to survive. That seems like some straightforward code.

There is nothing straightforward about it.

That is why human behavior can't be predicted on the individual level.

Robots acting out behaviors that mimic survival behaviors is light years away from having a survival instinct.
 
Robots do not have an emotional milieu as a result of hundreds of millions of years of the need to survive and propagate.
Whatever tricks they could be programmed to perform they will not undergo a miraculous transformation to an animal.
No one is arguing that they will suddenly become sentient.

In a different thread, NobleSavage brought up the possibility that in the future someone might recklessly imbue robots with sentience. This thread is about whether it is ethical to impose Asimov's laws on robots that you imbue with sentience, and whether it is ethical to give these beings sentience at all, as a shortcut in their programming strategy.
 