• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Artificial intelligence: would robots, androids and cyborgs have civil rights?

I'm sorry. I want to continue this but this place being slower than pond water today has really irritated the flip out of me. I will try again tomorrow.
 
People have tried to bring class action suits on behalf of pet dogs and cats. Part of it was that humans should not be allowed to technically own pets.
 

I suppose it comes down to degree of intelligence/sentience. Many species of animals may display intelligence/sentience to some degree, but is that enough to qualify for civil rights?

My earlier comment assumed a level of intelligence at least equal to human.
 
If a robot is to be considered as having rights, first one has to demonstrate it has needs for such. To my way of looking at the problem, if it is a problem, that would be something the machines themselves need to thrash out with us when they begin to design or birth themselves.

So you're against human rights? Humans are also just machines. We're "designed" by evolution and are built through self-replication. But we're still just machines. Human beings didn't have civil rights for most of our history, so we clearly don't need them.

I don't think there's any significant difference between robots and humans. Already. Whatever significant difference we think there is is just down to projection.

No. As I wrote '...one must demonstrate it (one) has needs for such ...'. Couple that with the difference between how one designs another and how one is designed by one's consequences among others and you have your demonstrator mechanism in place. There is a difference between evolution and design.
 
Your first mistake is equating intelligence with whatever humans are doing. It would be a complete waste of time making an AI that thinks like a human, because we get so much wrong. So it'll never happen. It's much more valuable to create programmes that are slower but get it right. Which is what all machine learning research is focused on.

We have a bad habit of using human thinking as the ONLY way to measure intelligence. So the more like a human something is thinking the smarter it is. It's dumb, and philosophically impoverished. It's just heritage from Christian theology and the need for humans to be special and God's chosen creatures.



It'll be autonomous in the same way humans are. The "self" is redundant. It's included in autonomous.



Your spelling is challenged to the point where I couldn't make sense of that.



What? I've never said it wasn't. But we'll have to suspend human civil rights first.

If I slap a robot it might say ouch. Does that make it human or sentient?

Neither. It's just a response to a stimulus. A computer has a threshold value and is programmed to respond in a certain way when that threshold is crossed. Humans work in exactly the same way. It's just that the threshold value is programmed by evolution. But the result is identical. The only reason humans feel pain is so that our genes can steer us away from things that have been lethal to our ancestors. That's all it is.
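The threshold-and-response idea above can be shown as a toy sketch. The threshold value and the "ouch" response are invented for illustration, not taken from any real system:

```python
# Toy stimulus-response rule with a threshold, as described above.
# The threshold is an arbitrary calibration for the sketch; in the
# human analogy it would be "set" by evolution rather than a coder.

PAIN_THRESHOLD = 5.0  # hypothetical value

def respond(stimulus: float) -> str:
    """Return a fixed response once the stimulus crosses the threshold."""
    return "ouch" if stimulus >= PAIN_THRESHOLD else ""

print(respond(2.0))  # below threshold: no response
print(respond(7.5))  # above threshold: "ouch"
```

Whether the response comes from a hand-set constant or an evolved one, the mechanism is the same stimulus-threshold-response loop the post describes.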

We are nowhere near creating a Data. For a time he had an emotion plug-in chip, a plot device.

Again... why is that your baseline, or even a desirable goal? Nobody would build something like that because it's a waste of time and resources. We're more likely to build AI that is highly specialised and narrow. Because it's more beneficial to humans.

Again what is autonomous? I can navigate to the grocery store without help based on knowledge and experience. That does not mean I can make completely unbiased choices free of any subconscious conditioning.

That's why I said, "autonomous like a human". If humans are autonomous then so are computers. Even today. Humans aren't especially creative or smart. It's a low bar to cross.

The most amazing thing about human intelligence is our ability to just navigate a room without bumping into stuff, or catch a ball thrown at us. That's the hardest thing to programme in robots. But this is something any little squirrel or bird can do with ease. So hardly a sign of intelligence.

Intelligence has no precise verbal definition. To me it means being able to work through new problems and conditions. Sea gulls figured out how to drop shellfish on rocks. Birds put nuts in front of cars.

That's why researchers don't call it AI. They call it "machine learning". There's a famous quote that goes "Machine learning is programmed in Python. AI is programmed in PowerPoint". In the field nobody ever mentions the word "intelligence" for the very reason you give.

It would mean creating an algorithm that allows such problem solving. I doubt that is possible. Goedel thought a human brain analog could be raised much as a child is.

Data was more computer than sentient; he was used more as a useful machine in the stories.

We're probably not going to build a robot like that. It would be very dangerous. What we want from robots is predictability. So we're never going to make them creative. Other than within narrow fields, like composing music or painting.
 

No. As I wrote '...one must demonstrate it (one) has needs for such ...'. Couple that with the difference between how one designs another and how one is designed by one's consequences among others and you have your demonstrator mechanism in place. There is a difference between evolution and design.

What? That was complete gibberish. Who cares if something is designed by humans or is a product of evolution? What possible difference could that make? You sound like a Christian.
 
...
It would mean creating an algorithm that allows such problem solving. I doubt that is possible.

And yet we observe that it IS possible. Or humans couldn't do it.

Unless we are to accept the crazy and repeatedly debunked religious idea of substance dualism.

We don't know what algorithms humans use to solve problems, but we do know that they must exist.

Just as heavier than air flight was known to be possible from the first time man saw birds fly - they didn't know how to do it, but to declare it impossible would have been (and was) insanity.
 

Steve's opinion is also based on obsolete ideas about Machine Learning. We've had all the necessary technical breakthroughs now. There's a reason Machine Learning has become the hottest field in computing. Right now AI is problem solving left and right. I have a friend who is both an expert in machine learning and an expert in crystallography. He couldn't figure out how certain proteins folded for 20 years. Built an AI. Solved it in hours. That's creative problem solving, doing it better than a human. We're already at the goal.

We've already reached a point where the difference between machine thinking and human thinking is insignificant. At least if we discuss ability. What sets us apart now are human emotional (and physical) needs. Well... we're never going to programme those into a computer. Why would we? Which means that Skynet will never arise, because no computer will ever feel the need to protect itself for survival.
 

Well, we hope not.



Are you Sarah Connor?
 

Another research field that is huge now is robot warriors: fully autonomous AI soldiers. Today every targeting system is manned by a human for legal reasons. A human has to push the button. But that was creatively re-interpreted in the Afghan and Iraq wars, down to: a human needs to be present and able to cancel attacks when necessary.

I think the next big war will feature human soldiers supported by a fleet of drones, autonomous tanks and autonomous artillery.

I wonder how our relationship with war will change when going to war has zero human cost to one's own side. How will that shift public opinion?
 

Well to a great extent we already saw that in Desert Storm. The allied casualties were lower than would have been expected in an exercise of similar scale. Most were blue-on-blue incidents due to units having advanced far further than their air cover expected.

Of course, nobody knew that casualties would be so low until after the event; and the subsequent occupation was a whole other story.

But it's certainly a good question - what happens when casualties on our side are essentially an impossibility?

And if the locals resist human occupation forces, how will they respond to robots and drones?
 

The locals won't resist robot occupation. Because resistance is futile. Lol.

We already know how they will respond. It's imperative that the robots don't kill civilians. So enemy soldiers will start dressing like civilians. We saw that in Afghanistan and Iraq. With the result that the drones started killing civilians. So robots will win the war but lose hearts and minds. It's still a better deal than having lots of one's own soldiers die.

I'm just really worried what happens when the inevitable clash between China and USA happens. China is also a super advanced high-tech country. They're basically on the same level as USA now. That's extremely worrying. I'm convinced the war over the Spratlys will be fought with nothing but drones and autonomous ships. Luckily nobody lives there. It's uninhabited islands. But it'll be spectacular and will show us what will come.
 
What? That was complete gibberish. Who cares if something is designed by humans or is a product of evolution? What possible difference could that make? You sound like a Christian.

Humans still don't understand evolution nor do they practice it as designers. Just because we're products of a process does not make our behavior conform to that process. Rather we conform to the conditions that led to our existence and not to the process of evolution. Who knows, one of us might have long tail feathers to attract females.
 

Humans still don't understand evolution nor do they practice it as designers.

Evolution is one of the most simple processes to understand. That's the beauty of it. So I'm not sure what it is you think humans don't understand about evolution?

Just because we're products of a process does not make our behavior conform to that process. Rather we conform to the conditions that led to our existence and not to the process of evolution. Who knows, one of us might have long tail feathers to attract females.

Wut? Of course our behaviour conforms to that process. That's why we haven't gone extinct. Perhaps you mean that evolution is incredibly inefficient? You'll have no argument from me on that. Yes, some of us actually do have tail feathers to attract females. So? It doesn't disprove evolution, or mean that we don't understand it.

There's also genetic drift. Bizarre stuff maintained in the genome because it either has no cost, or the gene that codes for it also codes for something else that is very useful for our survival and procreation.
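The drift point above can be illustrated with a toy simulation (all parameters invented): a neutral allele's frequency performs a random walk across generations until it is lost or fixed, which is how cost-free traits can persist or vanish by chance alone.

```python
# Toy Wright-Fisher style sketch of neutral genetic drift.
# Population size, generations, and starting frequency are arbitrary.
import random

def drift(freq: float, pop: int, generations: int, seed: int = 1) -> float:
    """Resample a neutral allele's frequency each generation."""
    rng = random.Random(seed)
    for _ in range(generations):
        # next generation: pop independent draws at the current frequency
        carriers = sum(rng.random() < freq for _ in range(pop))
        freq = carriers / pop
        if freq in (0.0, 1.0):  # allele lost or fixed; the walk ends
            break
    return freq

print(drift(0.5, pop=50, generations=200))
```

No selection term appears anywhere in the loop, yet the frequency still wanders to loss or fixation; that is the "bizarre stuff maintained in the genome" effect with no cost involved.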
 
...
And yet we observe that it IS possible. Or humans couldn't do it.

It is possible in a limited sense. There are adaptive systems. There are chimps that figured out how to chip rocks to make tools to crack nuts. Coding the complete chimp creative capacity into a Turing Machine algorithm would be impossible.

There are self-modifying programs; the problem is placing limits to keep them from a runaway condition.

I doubt there is a solution other than a neural net that mimics the brain. Our creativity is a complex interaction of different areas of the brain, plus knowledge, plus experience that starts at birth. It is also affected by emotional states, which are chemically based.


The robot union today went on strike over better quality lube oil.
 

It is possible in a limited sense.

No, it's demonstrably possible. No 'limited sense' - it happens, so it must be possible.
 
A thought experiment.

You get into a car type you have never been in before. You quickly see how to

Find the ignition
adjust the steering wheel
open close windows
recognize door handles
find and operate air conditioning, heating, and audio
window wipers
turn signals

Using a computer language, with a robot that has digital video and audio processing along with human dexterity and sensors, how would you code the same human capacity? Go beyond generalizations and talk details.

Given that there can be a lot of optical clutter along with the things you want to perceive, it would need to be a general solution.

Anyone who could accomplish that would gain international recognition, at least in engineering and science. Beyond a certain level of complexity, logic becomes impossibly convoluted.

I found in manufacturing that beyond a certain point in creating instructions it becomes impossible to reduce it to a set of discrete steps with conditional logic and jumps. It requires a human capacity to see and analyze and put it together.
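For contrast with the hand-coded conditional logic the post describes, here is a toy sketch of the data-driven route machine learning takes: label a few examples and classify new observations by nearest neighbour instead of enumerating rules. The features (say, aspect ratio and height above the floor in metres) and the labels are invented for illustration.

```python
# Toy nearest-neighbour classifier: instead of hand-writing conditional
# logic for every appearance of a door handle, store labelled examples
# and generalise by distance to the closest one.
import math

examples = [
    ([4.0, 0.9], "door handle"),
    ([3.5, 1.0], "door handle"),
    ([1.0, 0.1], "clutter"),
    ([0.8, 1.6], "clutter"),
]

def classify(features):
    """Return the label of the closest labelled example."""
    return min(examples, key=lambda e: math.dist(features, e[0]))[1]

print(classify([3.8, 0.95]))  # near the handle examples -> "door handle"
```

Adding a new car interior then means adding labelled examples rather than rewriting conditional logic, which is exactly the escape from the discrete-steps explosion described above.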
 

So you consider your personal incompetence or inability to be a reflection of a physical law that renders anything YOU can't think of a way to do 'impossible'. Got you.

Meanwhile in reality, it doesn't matter how complex the task is; If we observe that there are physical systems that can do it, then we MUST accept that it is possible EVEN IF we don't know how to do it, and EVEN IF it's really complicated.

Humans can do it; Therefore it's possible algorithmically - unless you invoke the dualist craziness that says that human brains (or brains in general) are physically different from the rest of reality.
 
Your argument is called bootstrapping. Humans CAN do it so it will be done.

Scifi is wonderful. Practical reality much harder.

There may one day be a complete working model of the brain that can simulate a person. The commercial neural nets used for things like video pattern recognition are a first evolution.

I did not say never, I said it would be near impossible using computer algorithms, in other words Turing Machines.

Back in the 80s AI was being proclaimed as an end to a large part of engineering work. It did have a significant impact, but not what was predicted.

The idea was to reduce the knowledge of experts in a field to a set of rules and logic. There are practical limits to logic-based systems. I would guess that linear Aristotelian logic in our brains is only a small subset of a higher processing system. Continuous vs discrete processing. Our brains process complex situations, with physical responses, very fast.
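The 80s expert-system idea mentioned above (expert knowledge reduced to if-then rules) can be sketched as forward chaining: fire any rule whose conditions are known facts, and repeat until nothing new can be derived. The rules here are made-up examples, not from any real system.

```python
# Minimal forward-chaining rule engine in the 80s expert-system style.
# Each rule is (set of condition facts, concluded fact).

rules = [
    ({"motor hot", "fan off"}, "fan failed"),
    ({"fan failed"}, "replace fan"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire any rule whose conditions are all known facts."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"motor hot", "fan off"}))
# derives "fan failed" and then "replace fan"
```

The practical limit the post points at shows up quickly: every nuance of expert judgment needs its own explicit rule, and the rule base becomes unmanageable long before it captures what the expert actually does.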
 