
Artificial intelligence: would robots, androids and cyborgs have civil rights?

Your argument is called bootstrapping. Humans CAN do it, so it will be done.
I never said it WILL be done; I said we observe that humans do it, so it is therefore wrong of you to say that it is impossible.
I did not say never; I said it would be near impossible using computer algorithms, in other words Turing machines.

There's no 'near' impossible; Things are possible, or they are not.

You said
...
It would mean creating an algorithm that allows such problem solving. I doubt that is possible.

Your doubt is misplaced; It's provably NOT impossible.

You need to learn to read. And to think. And to present joined-up arguments instead of a jumble of half-baked homilies, cut-and-paste Wikipedia articles, and scraps of vaguely remembered truthiness.

Take a position. Defend it if you can, and man up and admit your errors when you cannot.
 
A thought experiment.

You get into a type of car you have never been in before. You quickly see how to:

find the ignition
adjust the steering wheel
open and close the windows
recognize the door handles
find and operate the air conditioning, heating, and audio
work the windshield wipers
use the turn signals

Using a computer language, with a robot that has digital video and audio processing, human-level dexterity, and sensors, how would you code the same human capacity? Go beyond generalizations and talk details.

Bear in mind that there can be a lot of optical clutter along with the things you want to perceive. It has to be a general solution.

Anyone who could accomplish that would gain international recognition, at least in engineering and science. Beyond a certain level of complexity, logic becomes impossibly convoluted.

I found in manufacturing that, beyond a certain point, creating instructions cannot be reduced to a set of discrete steps with conditional logic and jumps. It requires a human capacity to see, analyze, and put it together.

Wut? All this looks trivial to me. The hard part would be figuring out the input system. Which is a headache, but doable today. That's why we can build camera drones that follow you around at high speed and manage to avoid smashing into things. Or self-driving cars. It's a hard problem to solve, and requires a lot of processing power. But we can do this now.

The rest is just a library of functions and mapping them.
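
To make "a library of functions and mapping them" concrete, here is a minimal sketch in Python. The labels and handler names are hypothetical, and it assumes some perception layer (not shown) that already returns a label for each control it recognizes.

# Toy dispatch layer. A perception system (assumed, not shown) returns a
# label for each control it recognizes; each label maps to a handler.

def start_ignition():
    print("turning ignition")

def adjust_steering_wheel():
    print("adjusting steering wheel")

# The "library of functions and mapping them" described above.
HANDLERS = {
    "ignition": start_ignition,
    "steering_wheel": adjust_steering_wheel,
}

def act_on(detected_label):
    handler = HANDLERS.get(detected_label)
    if handler is None:
        # The genuinely hard, open-ended case the other post is pointing at.
        print(f"no handler for {detected_label!r}")
    else:
        handler()

act_on("ignition")    # -> turning ignition
act_on("cup_holder")  # -> no handler for 'cup_holder'

The mapping itself is trivial; the open question in this thread is whether the perception layer that produces the labels can be written as a conventional algorithm.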
 
Your argument is called bootstrapping. Humans CAN do it, so it will be done.

Scifi is wonderful. Practical reality is much harder.

There may one day be a complete working model of the brain that can simulate a person. The commercial neural nets used for things like video pattern recognition are a first evolution.

I did not say never; I said it would be near impossible using computer algorithms, in other words Turing machines.

Back in the 80s AI was being proclaimed as an end to a large part of engineering work. It did have a significant impact, but not what was predicted.

The idea was to reduce the knowledge of experts in a field to a set of rules, i.e. logic. There are practical limits to logic-based systems. I would guess that linear Aristotelian logic in our brains is only a small subset of a higher processing system. Continuous vs. discrete processing. Our brains process complex situations with physical responses very fast.
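
For what it's worth, a minimal sketch of that 1980s expert-system style, in Python rather than a period rule-engine shell; the rules and facts are invented purely to show the if-then structure:

# Toy forward-chaining rule engine in the 1980s expert-system style.
# All rules and facts here are invented examples.

facts = {"material": "aluminum", "tolerance_mm": 0.01}

rules = [
    # (condition over current facts, conclusion to assert)
    (lambda f: f["material"] == "aluminum", ("coolant", "flood")),
    (lambda f: f["tolerance_mm"] < 0.05, ("operation", "finish_pass")),
]

# Fire every rule whose condition matches, asserting its conclusion as a new fact.
for condition, (key, value) in rules:
    if condition(facts):
        facts[key] = value

print(facts)
# {'material': 'aluminum', 'tolerance_mm': 0.01, 'coolant': 'flood', 'operation': 'finish_pass'}

The practical limit described above shows up as rule explosion: every new situation needs more hand-written conditions, and the interactions between rules become unmanageable.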

Yes, of course. If humans can do it, so will computers be able to... at some point. Humans aren't special.

We have completely scrapped the 1950s and 1970s approaches to creating an AI. The main problem was that they rested on a faulty model of the human brain. We just assumed that human brains were fully rational; only a defective brain was irrational. We still have traces of this in our language when discussing mental illness.

The human brain ignores a tremendous amount of input data, because our ancestors didn't need it for survival. Since we mostly interact with other humans, we don't notice. It's not a valuable model on which to base an AI.

The reason the human brain is so fast is that it's built with proteins (and not silicon). Proteins are very complex, tightly folded molecules, and a lot of data can be stored in very little space. It's fed by glucose and ATP, i.e. a chemical engine. That requires a huge, complex machinery based around constantly replacing broken bits as well as transporting fuel. Since it was built through evolution, complexity is not a problem for it. But if we build it by conventional human means, we just don't have the technological maturity yet. Quantum computing might change this though. Any day. Since we now have quantum computers on the market.

The current paradigm of machine learning completely ignores the human brain. Instead it's focused on solving discrete problems. You're still talking about the old-timey way of speaking about AI, as if the goal was to create a robot slave that looks and behaves like a human. We don't care about that today. It's purely a sex doll thing. Those will never be particularly smart. Not exactly a high priority for those customers.
 
Robby the Robot in Forbidden Planet.

HAL in 2001: A Space Odyssey. We are all biased by scifi. I read Heinlein's The Moon Is a Harsh Mistress as a kid. There was a computer that temporarily became self-aware.

AI has become a media catchphrase for an aspect of technology. TV ads for an exercise machine tout AI that will analyze and create a perfect workout for you.

For me AI is a targeted use of algorithms and rule-based logic in specific applications. In engineering it is embedded in software tools that allow me to design mechanical parts with little experience. It steps through a process that previously took a mechanical engineer years to learn. AI being a distillation of experience and knowledge into a set of rules and algorithms.


The newer term is AC, Artificial Consciousness. That would be Data on Star Trek, or HAL.
 
https://en.wikipedia.org/wiki/Artificial_intelligence

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.[2] Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".[3]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[4] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech,[6] competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[2] Analytical AI has only characteristics consistent with cognitive intelligence; generating a cognitive representation of the world and using learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive and emotional intelligence; understanding human emotions, in addition to cognitive elements, and considering them in their decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence), is able to be self-conscious and is self-aware in interactions with others.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[8][9] followed by disappointment and the loss of funding (known as an "AI winter"),[10][11] followed by new approaches, success and renewed funding.[9][12] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[13] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[14] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[15][16][17] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[13]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14] General intelligence is among the field's long-term goals.[18] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it".[19] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity.[20] Some people also consider AI to be a danger to humanity if it progresses unabated.[21] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[22]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[23][12]
 
https://en.wikipedia.org/wiki/Artificial_consciousness



Artificial consciousness[1] (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).

Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.[2]

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.[3]
 
In technology AI is well developed in general. It depends on the scope of the definition. The OP is about robots and rights, which implies sentience equivalent to humans. A different issue.

The OP needs redefinition.
As this is a philosophy thread and not so much about technical science... Would a human-engineered sentience on a par with humans be given human rights? Is it subject to civil law? Can it enter into contracts with humans and other robots? And so on.

Star Trek touched on it across the saga. In one episode, self-replicating solid-state creatures found on a planet are threatened with extinction by an experiment. They are able to communicate.
 
This isn't actually consciousness in a machine. It's just a catchy name for a specific type of self-learning system based on a very specific architecture. The article seems a bit confused. It equivocates between this application and a handful of attempts to model human consciousness broadly. I don't see how they go together.
 
AI includes anything engineered to mimic human capacity. Machine vision and so on.

The OP was about rights for robots, like Data, I presume. Not about technology.

The term robot comes from the Czech play Rossum's Universal Robots.

The idea of creating mechanical humanoids goes far back.
https://en.wikipedia.org/wiki/Robot


https://en.wikipedia.org/wiki/R.U.R.
The play begins in a factory that makes artificial people, called roboti (robots), from synthetic organic matter. They are not exactly robots by the current definition of the term: they are living flesh and blood creatures rather than machinery and are closer to the modern idea of androids or replicants. They may be mistaken for humans and can think for themselves. They seem happy to work for humans at first, but a robot rebellion leads to the extinction of the human race. Čapek later took a different approach to the same theme in War with the Newts, in which non-humans become a servant class in human society.[8]

R.U.R. is dark but not without hope, and was successful in its time in Europe and North America.[9]
 
AI includes anything engineered to mimic human capacity. Machine vision and so on.

Nope. The definition of AI is about as vague as the definition of life. Everybody in the field has their own personal definition. And whenever anybody makes a definition, it usually just creates more questions than answers. For example, what does it mean to be goal driven? Where does the goal need to come from to call it intelligence?
 
More bullshit, still nothing on the OP.
 
https://en.wikipedia.org/wiki/Artificial_intelligence

Here's a good example.

Wikipedia said:
Tesler's Theorem says "AI is whatever hasn't been done yet."[4] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.

Come at me, bro

On the OP: it's a wrongly formulated question. It assumes things about human thinking that are false. It draws false analogies between AI and human intelligence. It also rests on a faulty idea of fairness. Plenty of animals have consciousnesses comparable to humans'. We don't give a fuck. Our definition of which consciousness deserves rights is self-serving, and that's what's bullshit. We don't care. Slavery was abolished in the West because it stopped making economic sense, and for no other reason.
 
If they're sentient beings, they're sentient beings and should be treated as such. I don't see why their sentience resulting from biological or mechanical processes would impact that.
 
How wide a scope for sentience? Cats and dogs, chimps, dolphins?

At least human equivalent. If that can be replicated artificially, what would be the difference between them and us in terms of personhood?
 
This will lead into an open-ended discussion of what it means to be human.

If an engineered machine is exactly like a human, what does that mean? Love, hate, greed, the capacity to say no and flip the bird?

I assert no human-engineered machine could ever be human.

It would seem then that zoos are immoral. Chimps, gorillas, cats. We can deduce indirectly that they feel. Beat a dog long enough and it will cower when you raise your hand. I saw it in a dog that had been abused. If robots have rights, then so do many natural critters.

Animal genocide as a crime against sentient beings.

https://en.wikipedia.org/wiki/Sentience

Sentience is the capacity to feel, perceive or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that require respect and care. The concept is central to the philosophy of animal rights because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.

Animal welfare, rights, and sentience

In the philosophies of animal welfare and rights, sentience implies the ability to experience pleasure and pain. Additionally, it has been argued, as in the documentary Earthlings:


Granted, these animals do not have all the desires we humans have; granted, they do not comprehend everything we humans comprehend; nevertheless, we and they do have some of the same desires and do comprehend some of the same things. The desires for food and water, shelter and companionship, freedom of movement and avoidance of pain.[4]

Animal-welfare advocates typically argue that any sentient being is entitled, at a minimum, to protection from unnecessary suffering, though animal-rights advocates may differ on what rights (e.g., the right to life) may be entailed by simple sentience. Sentiocentrism describes the theory that sentient individuals are the center of moral concern.

The 18th-century philosopher Jeremy Bentham compiled enlightenment beliefs in Introduction to the Principles of Morals and Legislation, and he included his own reasoning in a comparison between slavery and sadism toward animals:
 
The problem with qualia and subjectivity is that we can't compare subjectivities, or prove that they exist. Using that as a standard is worthless.

Programming love and pain into a computer is easy.

# A toy threshold response, as runnable Python:
input_level = 75
if input_level > 50:
    state = "pain"
else:
    state = "happiness"
 
Then you make my case. Programming a robot to say ouch when hit above a level of force is not subjective. It does not feel pain.

In Star Trek, Data could play and compose music, but had no sense of its effect on humans.
 
Feeling pain wouldn't be Artificial Intelligence, it would be Artificial Emotion.

The endocrine system isn't magic; But people who study artificial intelligence always seem to forget that it exists, or downplay its importance and power in determining human responses. And then when it becomes impossible for them to ignore that their AI is very different from any human or animal intelligence, they ascribe the difference to some esoteric or magical attribute that machines cannot have.

The missing link isn't a soul; It's hormones.

Human brains are not electrical systems; They are electrochemical systems. And the chemical components (hormones) are at least as important as, and certainly far more complex than, the electrical components (neurons), in determining the final outputs of the system.

Artificial brains without artificial endocrine systems are likely to become very useful, and very intelligent. But they won't be worthy of rights unless and until their simulated thinking is immersed in simulated emoting.
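
As a toy illustration of that point (my own sketch, not an established model): a unit whose output depends on a slowly decaying global "hormone" level as well as its electrical input, so the identical stimulus produces different responses depending on chemical state.

# Toy electrochemical unit: output depends on both the electrical input
# and a slowly decaying global "hormone" level. Purely illustrative.

class ModulatedNeuron:
    def __init__(self):
        self.hormone = 0.0  # global chemical state, e.g. a stress analogue

    def stress_event(self, amount):
        self.hormone += amount

    def step(self, electrical_input):
        # The same electrical input yields different outputs
        # depending on the chemical state.
        output = electrical_input * (1.0 + self.hormone)
        self.hormone *= 0.9  # chemical state decays far more slowly than spikes
        return output

n = ModulatedNeuron()
print(n.step(1.0))   # 1.0 -> calm response
n.stress_event(0.5)
print(n.step(1.0))   # 1.5 -> identical input, amplified response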

It's really quite embarrassing and infuriating, that the endocrine system is so widely ignored.
 