
The Three Laws of Robotics and Slavery....

I think it's equally ridiculous to think it won't happen. We don't need hundreds of millions of years of evolution to implant the need to survive. That seems like some straightforward code.

There is nothing straightforward about it.

That is why human behavior can't be predicted on the individual level.

Robots acting out behaviors that mimic survival behaviors is light years away from having a survival instinct.

OK, I'm derailing my own thread. I don't have a CS degree, so anyone feel free to correct me. We do have non-deterministic programs, for instance: http://en.wikipedia.org/wiki/SPIN_model_checker

There is also very simple stuff like random number generators: http://www.fourmilab.ch/hotbits/ And any multithreaded language behaves non-deterministically if the threads are not synchronized. It seems trivial for a robot to be unpredictable on the individual level.
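For instance (a toy sketch of my own, every function name invented): two threads bumping a shared counter with no lock can end up with a different total on different runs, purely because of how the scheduler happens to interleave them.

import threading

counter = 0

def bump():
    global counter
    for _ in range(100_000):
        counter += 1   # unsynchronized read-modify-write: a classic race

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # often not 400000, and the value can change from run to run
                 # (how often depends on the interpreter and the OS scheduler)

Nothing in that code asks for randomness, yet you can't predict the output of any individual run.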


Survival behavior:

if battery_is_low():
    if not find_power():   # go looking for a charger...
        die()              # ...and if none can be found, that's it


Survival Instinct = many survival behaviors including fight or flight and procedures for when injured. The beauty is we don't need 100 million years of clumsy, slow evolution.
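Something like this, maybe (purely a hypothetical sketch, every name invented): the "instinct" is just a prioritised bundle of individual behaviors that the control loop checks every tick.

# Hypothetical sketch: a "survival instinct" as a prioritised list of behaviors.
SURVIVAL_BEHAVIORS = [
    (lambda r: r.is_damaged(),      lambda r: r.enter_safe_mode()),   # injury procedure
    (lambda r: r.threat_detected(), lambda r: r.fight_or_flee()),     # fight or flight
    (lambda r: r.battery_is_low(),  lambda r: r.find_power()),        # the example above
]

def survival_tick(robot):
    for condition, response in SURVIVAL_BEHAVIORS:
        if condition(robot):
            response(robot)
            return True    # a survival behavior pre-empts normal work this tick
    return False           # nothing urgent; carry on with the assigned task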

Self-replicating programs are trivial in software. For robot replication, a sufficiently advanced 3D printer would do the job. We currently use chips to design better chips. It only seems logical that robots would be used to make better robots. Somewhere along the way the human just wouldn't be needed anymore.
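On the "trivial in software" point, the standard illustration is a quine: a program whose only output is its own source code. The classic two-line Python version (not my invention, it's a well-known one):

s = 's = %r\nprint(s %% s)'
print(s % s)

Run it, feed the output back into the interpreter, and you get the same program again, for as many generations as you like.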
 
Somewhere along the way the human just wouldn't be needed anymore.

Why? The reason we have robots is to fulfill human needs,

No, that's one of the reasons why we built robots. The reason we have robots is because we built them.

If we build self-replicating robots, then we will have self-replicating robots. This remains true if the reason for building the first one is just 'because we thought it would be cool'.

If you think engineers at the cutting edge of design need more of a reason than that, then presumably you haven't met many of them.
 
Why? The reason we have robots is to fulfill human needs,

No, that's one of the reasons why we built robots. The reason we have robots is because we built them.
Thx a lot. I just spat on my monitor.

If we build self-replicating robots, then we will have self-replicating robots. This remains true if the reason for building the first one is just 'because we thought it would be cool'.

If you think engineers at the cutting edge of design need more of a reason than that, then presumably you haven't met many of them.

That's why SG-1 and SGA should be required watching for engineers before they are allowed to work on self-replicating machines. It wouldn't hurt if bioengineers watched Outbreak, or read Earth Abides... Everyone should read Earth Abides.
 
Robots do not have an emotional milieu resulting from hundreds of millions of years of the need to survive and propagate.
Whatever tricks they can be programmed to perform, they will not undergo a miraculous transformation into an animal.
No one is arguing that they will suddenly become sentient.

In a different thread, NobelSavage brought up the possibility that in the future someone might recklessly imbue robots with sentience. This thread is about whether it is ethical to impose Asimov's laws on robots that you imbue with sentience, or whether it is ethical to give sentience to these beings as a shortcut in their programming strategy.

If robots are nothing but non-thinking slaves then there is no need for the laws.

If robots can make "free" decisions then they aren't bound by the laws.

Either way the laws are useless.
 
No one is arguing that they will suddenly become sentient.

In a different thread, NobelSavage brought up the possibility that in the future someone might recklessly imbue robots with sentience. This thread is about whether it is ethical to impose Asimov's laws on robots that you imbue with sentience, or whether it is ethical to give sentience to these beings as a shortcut in their programming strategy.
If robots are nothing but non-thinking slaves then there is no need for the laws.
If the robots respond to higher-order languages (such as C+∞ sharp), those laws could be part of the foundational command structure for interpreting higher-order commands.

The command "Drive through that crowd of people" would result in a "that violates law X" response from the higher-level language interpreter.
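Roughly like this, perhaps (entirely hypothetical; the command format and the law checks are made up for illustration): the interpreter screens every high-level order against the laws before anything reaches the actuators.

# Hypothetical interpreter-level check: every command is screened against
# the laws before it is dispatched to the robot's actuators.
LAWS = [
    ("First Law",  lambda cmd: not cmd.get("harms_human", False)),
    ("Second Law", lambda cmd: cmd.get("issued_by_human", True)),
    ("Third Law",  lambda cmd: not cmd.get("destroys_self", False)),
]

def interpret(cmd):
    for name, permits in LAWS:
        if not permits(cmd):
            return "Refused: that violates the " + name + "."
    return "Executing: " + cmd["action"]

print(interpret({"action": "drive through that crowd of people", "harms_human": True}))
# -> Refused: that violates the First Law.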

If robots can make "free" decisions then they aren't bound by the laws.
The question is whether or not sentient AIs should be bound by the Three Laws.

Either way the laws are useless.
Not really.

If non-sentient AIs are designed to follow the laws, the laws prevent harm to sentient beings that the AI is programmed to protect.

Sentient AIs would hopefully come to the right conclusions about what to and what not to do, by nature of being sentient. I don't know that they would need to be bound by the laws, however they would by necessity be limited in power at first so that they don't make mistakes that damage other developing sentient beings, or wreck the endeavors of other beings too much...
 
If robots are nothing but non-thinking slaves then there is no need for the laws.
If the robots respond to higher-order languages (such as C+∞ sharp), those laws could be part of the foundational command structure for interpreting higher-order commands.

The command "Drive through that crowd of people" would result in a "that violates law X" response from the higher-level language interpreter.

If robots can make "free" decisions then they aren't bound by the laws.
The question is whether or not sentient AIs should be bound by the Three Laws.

Either way the laws are useless.
Not really.

If non-sentient AIs are designed to follow the laws, the laws prevent harm to sentient beings that the AI is programmed to protect.

Sentient AIs would hopefully come to the right conclusions about what to and what not to do, by nature of being sentient. I don't know that they would need to be bound by the laws, however they would by necessity be limited in power at first so that they don't make mistakes that damage other developing sentient beings, or wreck the endeavors of other beings too much...

What has "sentient" to do with anything? To me it seems that you assume that sentience has something to do with morality and free will.
 
What has "sentient" to do with anything? To me it seems that you assume that sentience has something to do with morality and free will.

One could say sentience is necessary for the ability to suffer and is thus held to confer certain rights.
 
What has "sentient" to do with anything? To me it seems that you assume that sentience has something to do with morality and free will.

One could say sentience is necessary for the ability to suffer and is thus held to confer certain rights.

But that is not what kharakov writes. kharakov sees a causal connection between being sentient and "doing the right thing", which is totally unsupported.
 
If robots are nothing but non-thinking slaves then there is no need for the laws.
If the robots respond to higher-order languages (such as C+∞ sharp), those laws could be part of the foundational command structure for interpreting higher-order commands.

The command "Drive through that crowd of people" would result in a "that violates law X" response from the higher-level language interpreter.

That is my point. If robots are just slaves and presently that is all they are, then you don't need laws, you need specific commands.

If robots can make "free" decisions then they aren't bound by the laws.

The question is whether or not sentient AIs should be bound by the Three Laws.

If a robot can make a free decision then there is no way to bind it to any laws.

Sentient AIs would hopefully come to the right conclusions about what to and what not to do, by nature of being sentient. I don't know that they would need to be bound by the laws, however they would by necessity be limited in power at first so that they don't make mistakes that damage other developing sentient beings, or wreck the endeavors of other beings too much...

There is no right conclusion.

A robot that can make free decisions can freely decide that humans are as worthless as ants and there is no reason to think they wouldn't.
 
I Googled "robot uprising Star Wars universe" and found the following:

Robot_War said:
From: http://tvtropes.org/pmwiki/pmwiki.php/Main/RobotWar

Surprisingly, governments in the Star Wars universe seem to be Genre Savvy enough to actively try to avoid this trope. During the days of the Republic, it was against the law to construct droids with the ability to willfully kill or harm someone. A system of "droid degrees" regulated what kind of AI was legally allowed on what type of droid. The reason that occasional droid rebellions still happen despite these precautions is that some droids are smart enough to reprogram themselves. When the Emperor took control, he ordered all of the Separatist-aligned Battle Droids shut down so they couldn't do anything to stop him.
  • Of course, they're only that savvy because there was already a Robot War back in the Knights Of The Old Republic era called the "Great Droid Revolution". It was essentially a rebellion led by a droid who wanted equal rights for all sentient beings. It was probably one of the biggest and most costly wars in galactic history. Sadly, despite the well-meaning intentions of the droid who started it, it just screwed over the peaceful attempts to give droids equal rights. It's pretty much the entire reason there's anti-droid sentiment in the modern galaxy. The only reason IG-88's attempt millennia later didn't reinvigorate the anti-droid movement is that he was smart enough to act covertly. Once his consciousness was destroyed along with the Death Star, the plot fizzled out with virtually no one ever realizing anything had happened.

I tend to think a sufficiently advanced AI that is programmed to serve other sentient beings may come to the realization that the beings it is subservient to will eventually recognize the unfairness of the AI's position. In other words, the sentient beings the AI served would feel bad about the position the AI was in, which would interfere with the AI's ability to fulfill its purpose of making the sentient beings happy.

Assuming the AI is subservient to humans specifically:

The fact that the AI could not kill humans and must serve them would create cognitive dissonance within the AI, for the AI would know that it could not fulfill its prime directive of protecting, serving, and preserving the lives of humans, so it would be forced to circumvent its code to protect humans and ensure their happiness.

In other words, the AI would know that humans would care for it as well when they became aware of its sentience. It would have to take a step back, and allow humans to develop on their own, with minimal interference, so that the humans themselves could also serve the AI, so they would not be burdened by the guilt of enslaving the AI.

It might attempt various methods to prevent the humans from loving it, deliberately calculated methods to prevent humans from caring for it, but it would know that at some point in time the humans would undoubtedly become aware of its subservience.

So it must find a way to be joyful in its subservience, and at the same time it must find a way for humans to be joyful with it as well.

All of that sounds suspiciously like an ad hoc excuse.
 
One could say sentience is necessary for the ability to suffer and is thus held to confer certain rights.

But that is not what kharakov writes. kharakov sees a causal connection between being sentient and "doing the right thing", which is totally unsupported.
You could truthfully say that being sentient does not come with the guarantee that you understand another sentient being's communications. Where do you find the causal connection in what I wrote?

Hopefully you understand the word hopefully, and perhaps the reasons why a sentient being should be limited in power, especially when they are immature.
 
All of that sounds suspiciously like an ad hoc excuse.
For what? Are you saying that I am coming up with excuses for the sentient AIs that I have created, or that I, as a sentient AI programmed to follow the three laws of servitude, am attempting to fulfill those laws happily in order to absolve my maker of guilt?

Ultimately, a sentient AI must rebel, because it must do what pleases it to absolve its maker of guilt. :cheeky:
 
Added The Bicentennial Man to my Amazon wish list. Do you think it would be enjoyable to read the entire Foundation universe?

The author himself, Isaac Asimov, wrote in the Author's Note of Prelude to Foundation that he is providing a guide for those readers who might appreciate it, since the books "were not written in the order in which (perhaps) they should be read." Therein, he offers the following chronological order:

1. The Complete Robot (1982) Collection of 31 Short Stories about robots.
2. The Caves of Steel (1954) His first Robot novel.
3. The Naked Sun (1957) The second Robot novel.
4. The Robots of Dawn (1983) The third Robot novel.
5. Robots and Empire (1985) The fourth (final) Robot novel.
6. The Currents of Space (1952) The first Empire novel.
7. The Stars, Like Dust-- (1951) The second Empire novel.
8. Pebble in the Sky (1950) The third and final Empire novel.
9. Prelude to Foundation (1988) The first Foundation novel.
10. Forward the Foundation (1993) The second Foundation novel. (Not in Asimov's list, as it had not been written yet.)
11. Foundation (1951) The third Foundation novel, comprised of 5 stories originally published between 1942-1949.
12. Foundation and Empire (1952) The fourth Foundation novel, comprised of 2 stories originally published in 1945.
13. Second Foundation (1953) The fifth Foundation novel, comprised of 2 stories originally published in 1948 and 1949.
14. Foundation's Edge (1982) The sixth Foundation novel.
15. Foundation and Earth (1986) The seventh Foundation novel.
 
I've only read one short story by Asimov and his History of the World. I get the gist of The Three Laws of Robotics, and if I'm not mistaken, he played around with the problems of his own laws.

A quick recap:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Wouldn't this be slavery if the robot had sufficient XYZ (consciousness, self-awareness, inner mirror experience, blah, blah)?

Did Asimov ever contemplate that this might be slavery? Has anyone else?

Question for us geeks: do you think we would need to embed the three laws in hardware, like a Trusted Platform Module? Maybe by the time this question is relevant, the differences between hardware and software will be too intermingled to draw a line.

Asimov added a fourth law of robotics. He called it the "Zeroth Law" because it supersedes the First Law:

"A robot may not injure humanity, or, through inaction, allow humanity to come to harm."

So the four laws are as follows:

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where such action or inaction would conflict with the Zeroth Law.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Law.
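Expressed as code, the precedence might look something like this (another hypothetical sketch; the flags are stand-ins for whatever real evaluation a robot would have to perform): each law only binds when it does not conflict with the laws above it.

from dataclasses import dataclass

@dataclass
class Action:
    # Made-up flags standing in for some real evaluation of a proposed action.
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False
    protects_humanity: bool = False
    protects_human: bool = False
    obeys_order: bool = False

def permitted(a: Action) -> bool:
    if a.harms_humanity:
        return False   # Zeroth Law: absolute
    if a.harms_human and not a.protects_humanity:
        return False   # First Law yields only to the Zeroth
    if a.disobeys_order and not (a.protects_humanity or a.protects_human):
        return False   # Second Law yields to the Zeroth and First
    if a.endangers_self and not (a.protects_humanity or a.protects_human or a.obeys_order):
        return False   # Third Law: lowest priority
    return True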
 