
Artificial intelligence: would robots, androids, and cyborgs have civil rights?

This will lead into an open-ended discussion of what it means to be human.

If an engineered machine is exactly like a human, what does that mean? Love, hate, greed, the capacity to say no and flip the bird?

I assert no human-engineered machine could ever be human.

It would seem then that zoos are immoral. Chimps, gorillas, cats. We can deduce indirectly that they feel. Beat a dog long enough and it will cower when you raise your hand. I saw it in a dog that had been abused. If robots have rights then so do many natural critters.

Animal genocide as a crime against sentient beings.

https://en.wikipedia.org/wiki/Sentience

Sentience is the capacity to feel, perceive or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that require respect and care. The concept is central to the philosophy of animal rights because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.

Animal welfare, rights, and sentience

Main articles: Animal consciousness, Animal cognition, Animal welfare, Animal rights, and Pain in animals

In the philosophies of animal welfare and rights, sentience implies the ability to experience pleasure and pain. Additionally, it has been argued, as in the documentary Earthlings:


Granted, these animals do not have all the desires we humans have; granted, they do not comprehend everything we humans comprehend; nevertheless, we and they do have some of the same desires and do comprehend some of the same things. The desires for food and water, shelter and companionship, freedom of movement and avoidance of pain.[4]

Animal-welfare advocates typically argue that any sentient being is entitled, at a minimum, to protection from unnecessary suffering, though animal-rights advocates may differ on what rights (e.g., the right to life) may be entailed by simple sentience. Sentiocentrism describes the theory that sentient individuals are the center of moral concern.

The 18th-century philosopher Jeremy Bentham compiled Enlightenment beliefs in Introduction to the Principles of Morals and Legislation, and he included his own reasoning in a comparison between slavery and sadism toward animals: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

The problem with qualia and subjectivity is that we can't compare subjectivities, or prove that they exist. Using that as a standard is worthless.

Programming love and pain into a computer is easy.

def feel(input_level):
    # crude threshold rule: strong input registers as "pain"
    if input_level > 50:
        return "pain"
    return "happiness"

Then you make my case. Programming a robot to say ouch if hit above a given level of force is not subjective. It does not feel pain.

In Star Trek, Data could play and compose music, but had no sense of its effect on humans.

Our feelings are just a control system. There's nothing special or magical about them. We're empathic social creatures, so we can mirror each other's pain. But that can also be programmed into a computer.
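
To make that concrete, here's a minimal sketch in Python (the threshold, the mirror factor, and the function names are all invented for the example, not any real API):

# A toy "mirrored pain" control loop: one agent's pain signal feeds,
# attenuated, into another agent's state. Nothing magical, just control.
MIRROR_FACTOR = 0.5  # arbitrary: how strongly we echo another's state

def feel(stimulus):
    # own pain grows once the stimulus passes a threshold
    return max(0.0, stimulus - 50.0)

def mirror(other_pain):
    # "empathy": a scaled copy of the other agent's pain
    return MIRROR_FACTOR * other_pain

my_pain = feel(80)            # direct pain: 30.0
your_pain = mirror(my_pain)   # mirrored pain: 15.0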

I don't agree with Searle and his Chinese room. I think it is special pleading.

Because we are humans we have an incentive to think we are extra special. But we're not.
 
...We have a bad habit of using human thinking as the ONLY way to measure intelligence. So the more like a human something thinks, the smarter we judge it to be. That's dumb and philosophically impoverished. It's just heritage from Christian theology and the need for humans to be special and God's chosen creatures.

Could you elaborate on the reference to the heritage of Christian theology?

Are you referring to what some other posters said? I'm paraphrasing here:

Some societies "have a dreadful record of giving other human beings any rights."
Consider "the history human beings have with what some consider lesser beings"

https://en.m.wikipedia.org/wiki/Great_chain_of_being

When the Catholic Church spread Christianity through medieval Europe, the various pagan lords were in many respects democratically elected. Pre-Christian pagan society was hierarchical, but much flatter among the elites. The way the Catholic Church managed to convert the kings was to have them believe their family was special, chosen by God. It became a feature of Christian thought and theology.

If a pagan lord converted to Christianity, it allowed him to seize total autocratic power. His family would reign supreme in perpetuity. This is how Christianity spread beyond the Roman Empire.

Pagan culture was highly meritocratic: whatever works, works, and the wealthier and more successful someone is, the better they probably are.

In Christian culture we're all equal under God. Having wealth and status is a sin; certainly flaunting it is. It doesn't matter how you got rich: just being rich and successful is itself a sin. This is because Christianity started out as a religion for Roman slaves and acted to elevate their group.

That wasn't particularly appealing to the nobles and rich people of the Roman empire. That's where the Great Chain of Being came into play.

It was used to justify human mastery of the animals. It was used to justify slavery. It was used to justify imperial power. While being an emperor was inherently sinful, the emperor was special, God's chosen one, which made it OK.

It is of course absolute bullshit. It's just special pleading.

When Christianity died in the West and we "converted" to Enlightenment values, democracy and secular humanism, we kept this special pleading for humanity, something Nietzsche aptly pointed out. And we keep doing it. At this point it's deeply ingrained in Western secular thought.

That's why the idea of "saving the planet" is so appealing to us. Environmentalism isn't about saving the planet; it's about protecting our own habitat. The planet will be fine no matter what. With the pollution, we're not destroying the planet; we're just slowly killing ourselves. But we still like to think of it as saving the planet, because under God's great chain of being, where he placed us in charge of Earth, we have a duty and responsibility to save it.
 
Our feelings are just a control system. There's nothing special or magical about them. We're empathic social creatures, so we can mirror each other's pain. But that can also be programmed into a computer.

I don't agree with Searle and his Chinese room. I think it is special pleading.

Because we are humans we have an incentive to think we are extra special. But we're not.

I find it difficult to conclude that feelings are just control systems when it is obvious they arise from motive generators probably extant before brain. The fact that most motive impulses come from sources outside the brain is another indicator.

We are empathetic because we learn, not because we are social. We are social because nature found that more is better than less and that sharing is better than all inclusive.

Which brings me back to stating the position that civil rights are not inherent in the existence of robots since we don't actually use them in other than specialized circumstances. Specifically we don't use them where care and self are relevant.
 
Our feelings are just a control system. There's nothing special or magical about them. We're empathic social creatures, so we can mirror each other's pain. But that can also be programmed into a computer.

I don't agree with Searle and his Chinese room. I think it is special pleading.

Because we are humans we have an incentive to think we are extra special. But we're not.

I find it difficult to conclude that feelings are just control systems when it is obvious they arise from motive generators probably extant before brain. The fact that most motive impulses come from sources outside the brain is another indicator.
Evolution never throws anything useful away. Before there were brains, animals were controlled by their endocrine systems.

The central nervous system is a newfangled way to achieve even finer control. But it never replaced the endocrine system; it just works alongside it.

To say that anything is 'just' a control system is almost certainly wrong, because evolution always uses the tools available for every benefit to which they can be turned. Few biological features have just one single function.
We are empathetic because we learn, not because we are social.
That's nonsense. There's no 'because' in evolution other than 'because it didn't go extinct'. Empathy, learning and sociality are interrelated, but none is 'because of' another. Just-so stories are unhelpful.
We are social because nature found that more is better than less and that sharing is better than all inclusive.
That's also why we are empathetic, and why we learn: because those individuals with these things survived better than those without.
Which brings me back to stating the position that civil rights are not inherent in the existence of robots since we don't actually use them in other than specialized circumstances. Specifically we don't use them where care and self are relevant.
We don't today. But the OP is asking what might happen if we did at some point in the future.
 
Our feelings are just a control system. There's nothing special or magical about them. We're empathic social creatures, so we can mirror each other's pain. But that can also be programmed into a computer.

I don't agree with Searle and his Chinese room. I think it is special pleading.

Because we are humans we have an incentive to think we are extra special. But we're not.

I find it difficult to conclude that feelings are just control systems when it is obvious they arise from motive generators probably extant before brain.
What? Weird sentence. How can something still exist before it existed? I suggest using simpler words?

The fact that most motive impulses come from sources outside the brain is another indicator.

...that my argument is the right one. It only proves my point

We are empathetic because we learn, not because we are social. We are social because nature found that more is better than less and that sharing is better than all inclusive.

Too bad science doesn't agree with you. That's like saying cats aren't capable of learning because they're all antisocial psychopaths, but ants are capable of learning because they protect and care about each other. I think it's just more special pleading on your part.

Of course it's 100% pure instinct that makes humans care about each other. If you think you're above instinct you can just go and stand in shame with all the religious people with their special pleading.

Which brings me back to stating the position that civil rights are not inherent in the existence of robots since we don't actually use them in other than specialized circumstances. Specifically we don't use them where care and self are relevant.

How can something be "inherent in" something? Isn't the "in" redundant? It's surely, "inherent to"? And it's not inherent to robots since they don't have them now. So, not sure what you mean?
 
1. It gets down to what one calls feelings. A tendency to move toward something can be seen as an expression of behavior related to feelings, since moving toward suggests attraction, and the opposite for withdrawal. Yes, plants, though brainless, are noted for such behavior.

2. Not if one includes all life as covered by principles of behavior.

3. You miss my point. Brains occur in ants, whilst neither brains nor even nervous tissue occur in plants or most single-celled animals. It is pretty hard to suggest paramecia or trees 'feel', even though there are behaviors in them that suggest attraction and rejection. Even though it's true I was looking at animals that have secretory sources for chemicals that lead toward moving toward and away, not included in neural systems, my assertions apply to obviously brainless organisms. Affect is an observable, not an inherent motive, in many organisms. That at the base of all nervous activity are affect-purposed chemicals goes a long way toward suggesting that such chemicals existed, and were incorporated into brain function, as a result of already being available prior to the advent of nervous systems.

4. As with the rest, you limit 'instinct' to behavior associated only with brain-carrying organisms. I'm pretty sure such programming existed before the introduction of either nervous tissue or nervous systems in organisms. Sponges and flowers, for instance, behave.

The whole point is that it's not the nervous system that lies at the base of affective or effecting behavior. Rather, it's chemicals such as adrenaline, testosterone, and estrogen that subserve such activities. It's not that something can adapt by program that serves awareness; it's function or organization that dictates whether one requires will or consciousness. Computers mostly lie outside such attributions since they are built to serve special functions in accordance with their purposeful design by humans.
 
1. It gets down to what one calls feelings. A tendency to move toward something can be seen as an expression of behavior related to feelings, since moving toward suggests attraction, and the opposite for withdrawal. Yes, plants, though brainless, are noted for such behavior.

Yes, which is why I think that plants also have feelings. There's an extremely simple animal called a sea squirt, and it's got all the same neurology as humans: the exact same neurotransmitters. Yet it's no more intelligent than a stalk of broccoli.

2. Not if one includes all life as covered by principles of behavior.

I don't understand what you mean.

3. You miss my point. Brains occur in ants, whilst neither brains nor even nervous tissue occur in plants or most single-celled animals. It is pretty hard to suggest paramecia or trees 'feel', even though there are behaviors in them that suggest attraction and rejection. Even though it's true I was looking at animals that have secretory sources for chemicals that lead toward moving toward and away, not included in neural systems, my assertions apply to obviously brainless organisms. Affect is an observable, not an inherent motive, in many organisms. That at the base of all nervous activity are affect-purposed chemicals goes a long way toward suggesting that such chemicals existed, and were incorporated into brain function, as a result of already being available prior to the advent of nervous systems.

4. As with the rest, you limit 'instinct' to behavior associated only with brain-carrying organisms. I'm pretty sure such programming existed before the introduction of either nervous tissue or nervous systems in organisms. Sponges and flowers, for instance, behave.

The whole point is that it's not the nervous system that lies at the base of affective or effecting behavior. Rather, it's chemicals such as adrenaline, testosterone, and estrogen that subserve such activities. It's not that something can adapt by program that serves awareness; it's function or organization that dictates whether one requires will or consciousness. Computers mostly lie outside such attributions since they are built to serve special functions in accordance with their purposeful design by humans.

Claiming that you need a brain to have feelings is just more special pleading. I maintain that emotions are just a chemical control system. This means that any entity able to take in stimuli and adapt has feelings comparable to what we have; they are just expressed in different ways. Since we're humans and we don't want to feel pain, we think that the human control system is special and warrants extra attention. We also have a history of social Darwinism, and of people taking this kind of thinking as an excuse to commit atrocities, as if the idea that we're just chemical machines means that morals don't apply. So we're a bit allergic to it. But that doesn't mean it's not true. And we can still be moral and try to defend people. Don't let politics cloud your view of what is true.

Since computers have a control system they have feelings. Their feelings aren't any less authentic or genuine than ours.

- - - Updated - - -

I suggest reading up on the Chinese Room thought experiment by Searle.

https://en.wikipedia.org/wiki/Chinese_room
 
Here's a very good discussion of AI from 3/9/19 (which happened to be this past weekend) between Daniel Dennett, David Chalmers, and John Brockman, the editor of the book "Possible Minds: 25 Ways of Looking at AI".
https://www.c-span.org/video/?458463-1/possible-minds
 
I suggest reading up on the Chinese Room thought experiment by Searle.

https://en.wikipedia.org/wiki/Chinese_room

Good suggestion. It seems an AI guy and I were put to the test on a very similar problem back in 1988. Our manager asked us to devise arguments for a list-based or AI-based solution for handling degrading systems in commercial aircraft flight, as aids for pilots. Both of us were familiar with the graceful-degradation solutions then extant. The idea is that a simple error in system operation leads to increasingly more serious errors as time passes and attempts to normalize operations continue. One error corrected leads to other existing but previously non-operative errors becoming relevant, in an ever-increasing chain of events ultimately leading to total system failure and destruction of the plane.

Originally this was the problem confronted by IBM with their System/370 software, a very large body of code by the measures of that time, which had passed testing leaving many issues undetected. Those errors weren't relevant until fixes were made without accounting for them, which brought them forward, leading to more fixes. Obviously this leads to the gradual degradation of an OS, which led to the development of structured design and exhaustive testing as remedies.

The child was worse than the parent. Now size became the issue. There is a limit to the complexity that humans in organizations can manage before it exceeds the capacity of the organization to manage its groups and individuals.

When I read the evolution of the Chinese Room problem I had a déjà vu moment. The Chinese language is too complex for one to provide solutions satisfying every condition one might encounter in communicating. The problem is a non-problem.

Going back to our cascading degeneration of operability: it is best to keep the problem within the scope of those designated to handle it. So a simple hierarchy of lists, acting as the current ones do, succeeds where the design of an intelligent system requires an understanding of the operators that soon exceeds the ability of designers to develop solutions to the next cascading problem. That is, the AI method is too complex for resolving the problem in less than infinite time or with infinite money. The list guy won the day.

So if I had to respond to Searle's problem, I'd suggest an intuitive tool like a list of lists which can be searched for terminations by a competent crew.
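
Something like this, perhaps (a minimal sketch of the "list of lists" idea in Python, used here just for illustration; the fault names and checklist steps are invented, not from any real flight manual):

# Hypothetical hierarchy of lists: each fault maps to an ordered
# checklist the crew can search top-down until a step terminates
# the cascade. No inference engine, just lookup.
procedures = {
    "hydraulic_low": [
        "switch to standby pump",
        "if pressure still low: isolate circuit B",
        "land at nearest suitable airport",
    ],
    "generator_fail": [
        "shed non-essential electrical load",
        "start APU generator",
    ],
}

def checklist_for(fault):
    # unknown faults fall through to a generic terminating step
    return procedures.get(fault, ["land as soon as practicable"])

for step in checklist_for("hydraulic_low"):
    print(step)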

When you spoke of the sea squirt, you included something with a nervous system along a cord parallel to the digestive tract, and the means of motion available to the squirt. That's a damn sight more intelligent than broccoli. By intelligent I mean more problems were solved by the design, like changing locations of nutrients, getting to them, processing them, etc.
 
I suggest reading up on the Chinese Room thought experiment by Searle.

https://en.wikipedia.org/wiki/Chinese_room

Good suggestion. It seems an AI guy and I were put to the test on a very similar problem back in 1988. Our manager asked us to devise arguments for a list-based or AI-based solution for handling degrading systems in commercial aircraft flight, as aids for pilots. Both of us were familiar with the graceful-degradation solutions then extant. The idea is that a simple error in system operation leads to increasingly more serious errors as time passes and attempts to normalize operations continue. One error corrected leads to other existing but previously non-operative errors becoming relevant, in an ever-increasing chain of events ultimately leading to total system failure and destruction of the plane.

Originally this was the problem confronted by IBM with their System/370 software, a very large body of code by the measures of that time, which had passed testing leaving many issues undetected. Those errors weren't relevant until fixes were made without accounting for them, which brought them forward, leading to more fixes. Obviously this leads to the gradual degradation of an OS, which led to the development of structured design and exhaustive testing as remedies.

The child was worse than the parent. Now size became the issue. There is a limit to the complexity that humans in organizations can manage before it exceeds the capacity of the organization to manage its groups and individuals.

When I read the evolution of the Chinese Room problem I had a déjà vu moment. The Chinese language is too complex for one to provide solutions satisfying every condition one might encounter in communicating. The problem is a non-problem.

Going back to our cascading degeneration of operability: it is best to keep the problem within the scope of those designated to handle it. So a simple hierarchy of lists, acting as the current ones do, succeeds where the design of an intelligent system requires an understanding of the operators that soon exceeds the ability of designers to develop solutions to the next cascading problem. That is, the AI method is too complex for resolving the problem in less than infinite time or with infinite money. The list guy won the day.

...if you are programming the AI using decision trees (with LISP). Which is how they thought the human brain worked from the 1950s to the 1990s, and we just couldn't figure out what we were doing wrong. Until we learned that it wasn't at all how the human brain worked, and that work was all scrapped.
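
For flavor, that style looked something like this (a toy Python stand-in for a hand-built decision tree; the questions and labels are invented):

# GOFAI in miniature: knowledge hand-coded as a branching tree of
# fixed questions, walked one node at a time.
tree = ("has_wings?",
        ("can_fly?", "bird", "penguin"),   # yes-branch
        ("has_fins?", "fish", "mammal"))   # no-branch

def classify(node, answers):
    if isinstance(node, str):  # a leaf is just a label
        return node
    question, yes_branch, no_branch = node
    return classify(yes_branch if answers[question] else no_branch, answers)

print(classify(tree, {"has_wings?": True, "can_fly?": False}))  # penguin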

So if I had to respond to Searle's problem, I'd suggest an intuitive tool like a list of lists which can be searched for terminations by a competent crew.

The Chinese room is about consciousness. Does the guy in the Chinese room know how to speak Chinese? He's the engine of the room, so he's the one talking. But he's just passing notes (which he can't understand) back and forth. Translated to what we're talking about: when a stack of English notes is slipped under the door (your brain receives stimuli), you look into your catalogue and stick the corresponding Chinese word notes back under the door (you feel pleasure/dopamine release).
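
In code, the room is nothing but a lookup table (a toy Python sketch; the rule-book entries are invented samples):

# The man in the room: match the incoming note against a rule book
# and pass back the listed reply, understanding neither note.
rule_book = {
    "你好吗?": "我很好,谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗?": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def the_room(note):
    # he follows the rules; he never learns what the symbols mean
    return rule_book.get(note, "请再说一遍。")  # default: "please say it again"

print(the_room("你好吗?"))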

Or to make it more gruesome: the difference between an elevator and you is that you have a cat in a box constantly being tortured, and when you reach the desired floor we stop torturing the cat temporarily. But soon the torture resumes. That is the mechanism with which life "encourages" us to explore the world.

Searle has just removed the tortured cat from the room. If we're torturing the cat in the room, then Searle thinks we can truly speak Chinese, because nailing it makes us happy. That's my interpretation of where Searle goes wrong.

When you spoke of the sea squirt, you included something with a nervous system along a cord parallel to the digestive tract, and the means of motion available to the squirt. That's a damn sight more intelligent than broccoli. By intelligent I mean more problems were solved by the design, like changing locations of nutrients, getting to them, processing them, etc.

Bah... plants can photosynthesise. They can kill entire limbs if attacked. They communicate with each other. They can turn toxin production on and off. Plants are intelligent in their own way. I don't see how you can compare the types of intelligence between these two beings. Both have the "brains" they need to excel in their environment.

Don't get me wrong. I think humans are intelligent, the most intelligent creature on this planet, due to the flexibility of our intelligence and our ability to effortlessly switch between the symbolic and the real (allowing us to plan ahead and fantasise). It's not perfect, which is why so many believe in God. Anyhoo... it's a really cool system. But there's still no magic involved. If nature programmed us to have feelings, then we can certainly programme computers to have feelings.

Like I said earlier, humans are programmed using chemical engines with proteins as their base. Proteins are extremely tightly wound and efficient units for storing information in many different states, and because the system is chemical, putting it into stable states is simple. Computers today are programmed using silicon chips with simple circuits; they require constant power to hold stable states, or rely on inefficient relays. Computers send signals much, much faster than what's going on in a human brain: the human brain communicates at a glacial pace by comparison. But the human brain's signalling system sends much less information at a go.

Bottom line: the basic architecture of the human brain and of a computer are probably too different for us to manage to copy the brain in a computer. The only animal brain we've managed to simulate accurately in a computer is that of the tiny nematode C. elegans. Perhaps that's the best we can manage with our current computing power. We tried to build a mouse brain, but we didn't have the hardware necessary (the "Blue mouse" project).
 
...if you are programming the AI using decision trees (with LISP). Which is how they thought the human brain worked from the 1950s to the 1990s, and we just couldn't figure out what we were doing wrong. Until we learned that it wasn't at all how the human brain worked, and that work was all scrapped.

Good point, except my contention is not just that Searle took the wrong tack (he did); it's that he presumed the problem was soluble by technical input. It isn't. So his contention is one of question begging, as you so fondly and frequently point out the arguments of others to be.

So if I had to respond to Searle's problem, I'd suggest an intuitive tool like a list of lists which can be searched for terminations by a competent crew.

The Chinese room is about consciousness. Does the guy in the Chinese room know how to speak Chinese? He's the engine of the room, so he's the one talking. But he's just passing notes (which he can't understand) back and forth. Translated to what we're talking about: when a stack of English notes is slipped under the door (your brain receives stimuli), you look into your catalogue and stick the corresponding Chinese word notes back under the door (you feel pleasure/dopamine release).

Or to make it more gruesome: the difference between an elevator and you is that you have a cat in a box constantly being tortured, and when you reach the desired floor we stop torturing the cat temporarily. But soon the torture resumes. That is the mechanism with which life "encourages" us to explore the world.

Searle has just removed the tortured cat from the room. If we're torturing the cat in the room, then Searle thinks we can truly speak Chinese, because nailing it makes us happy. That's my interpretation of where Searle goes wrong.

He not only removed the tortured cat from the room, he presumed sequences of exchanges could be predicted. They can't be. His problem is one which reaches an infinity of possibilities very quickly. IOW, referring to my models, one can't gracefully degrade if one doesn't know the outcomes a priori.

When you spoke of the sea squirt, you included something with a nervous system along a cord parallel to the digestive tract, and the means of motion available to the squirt. That's a damn sight more intelligent than broccoli. By intelligent I mean more problems were solved by the design, like changing locations of nutrients, getting to them, processing them, etc.

Bah... plants can photosynthesise. They can kill entire limbs if attacked. They communicate with each other. They can turn toxin production on and off. Plants are intelligent in their own way. I don't see how you can compare the types of intelligence between these two beings. Both have the "brains" they need to excel in their environment.

Don't get me wrong. I think humans are intelligent, the most intelligent creature on this planet, due to the flexibility of our intelligence and our ability to effortlessly switch between the symbolic and the real (allowing us to plan ahead and fantasise). It's not perfect, which is why so many believe in God. Anyhoo... it's a really cool system. But there's still no magic involved. If nature programmed us to have feelings, then we can certainly programme computers to have feelings.

Like I said earlier, humans are programmed using chemical engines with proteins as their base. Proteins are extremely tightly wound and efficient units for storing information in many different states, and because the system is chemical, putting it into stable states is simple. Computers today are programmed using silicon chips with simple circuits; they require constant power to hold stable states, or rely on inefficient relays. Computers send signals much, much faster than what's going on in a human brain: the human brain communicates at a glacial pace by comparison. But the human brain's signalling system sends much less information at a go.

Bottom line: the basic architecture of the human brain and of a computer are probably too different for us to manage to copy the brain in a computer. The only animal brain we've managed to simulate accurately in a computer is that of the tiny nematode C. elegans. Perhaps that's the best we can manage with our current computing power. We tried to build a mouse brain, but we didn't have the hardware necessary (the "Blue mouse" project).

Although you give plants creds for this type of problem, we disagree on what constitutes proper analysis of the problem presented, in which we use both plants and animals. All I asserted was that the marker of intelligence in sea squirts vis-à-vis plants was the genetic outcome reached by each in generating behavior. Plants resolve the problem of appropriate behavior using genetics before the realization of the plant, whereas animals use genetics to provide systems capable of plastic behavior in the presence of plastic conditions. Huge difference in meaning and implications. A tree might resolve the problem of periodic infestations by beetles by designing in genetic responses for each of the beetle infestations according to other factors like moisture and temperature.

If physical conditions change unexpectedly and change the relation between physical conditions and pest infestations, the tree's response becomes worthless and extinction becomes a threat.

The animal, on the other hand, comes into the situation with equipment designed to adjust to conditions that have been survived in the past. While it may be challenged by extreme changes (like the human bottleneck caused by extreme cold), the likelihood that a change in genotype would be required is lessened, resulting in a higher probability of genetic continuity across the environmental event.

My contention is that the tree species would cease to survive because a priori genetic specification modification (epigenetic transformation) is more likely to fail than on-the-scene adaptability.

Obviously, having the capability to build structures and change existing conditions is much more reliable than methylation of one sort or another.

As for computer models, we can model ants and fruit flies and several bacterial species. Computers are in flux, just as programming languages are in flux. Given the cloud, with effectively infinite memory, one can anticipate humans exactly modeling the human brain in the foreseeable future. That won't solve the problem of systems too large to develop all solutions for, but it will resolve the problem of how the brain works.

Thanks for staying with it.

We're both blowin' much out the arse. Isn't it fun? We might actually be doing something serendipitous.
 
The animal, on the other hand, comes into the situation with equipment designed to adjust to conditions that have been survived in the past. While it may be challenged by extreme changes (like the human bottleneck caused by extreme cold), the likelihood that a change in genotype would be required is lessened, resulting in a higher probability of genetic continuity across the environmental event.

A sea squirt floats around until it finds purchase and then stays there for the rest of its life. It sucks in water through its mouth and shoots it out through its anus. Its lifecycle is, for practical purposes, identical to a plant's, which is why I used it as an example. Its brain is still very similar in architecture to the human brain. But I understand it's not the point you're making; just thought I'd point that out.

My contention is that the tree species would cease to survive because a priori genetic specification modification (epigenetic transformation) is more likely to fail than on-the-scene adaptability.

But trees do remember, and do plan ahead, and do think. Just last week I listened to a podcast where a scientist had dedicated their life to figuring out just how trees remember: a tree adapts when it starts flowering depending on the temperature at this time of year last year.

Both plant and animal intelligence is adaptive; they both "learn". What sets higher mammalian (and octopus) intelligence apart is more the complexity of thought, and how symbolic knowledge is processed. An octopus who has never seen a jar before can very quickly figure out how it works. It has the ability to move around symbolic and abstract models in its mind which it then tries out in the real world. That's remarkable, and something different from what you're talking about.

As for computer models we can model ants and fruit flies and several bacterial species. Computers are in flux just as program languages are in flux. Given the cloud, infinite memory, one can anticipate humans to exactly model the human brain in the foreseeable future. that won't solve the problem of too large to develop all solutions, but, it will resolve the problem of how brain works.

I don't think we will. Not because we can't; because it's a worthless enterprise without practical application. If we do it, it will only be out of pure blue-sky research curiosity. But that won't happen, because modelling it with the current tools is prohibitively expensive. And once we have something that seems to work just like a human brain, how would you test that it does? We don't know how the human brain works yet. Intelligence tests don't really measure intelligence; humans given the same problem will solve it in many different ways. We are creative. That's what's great about human brains. But it also makes us a greased-up pig to catch and measure. And that will make the people who paid for the human brain simulation unhappy. If we know in advance they'll be unhappy, we know in advance they won't do it.

We're both blowin' much out the arse. Isn't it fun? We might actually be doing something serendipitous.

Oh, no. What if we create Skynet by mistake?
 
It's Lippy the Lion and Hardy Har Har... or Augie Doggie and Doggie Daddy... or Tom and Jerry, Mickey and Minnie, Donald and Daisy... or Scrooge McDuck and the Beagle Boys... or Rowan and Martin... or Dean and Jerry... or Ed, Johnny and Doc... or the Katzenjammer Kids...

Enough. It's just Nancy and Sluggo, and Mutt and Jeff, along with Brenda Starr, Reporter. Oh look, there's Dondi.
 
Will robots have artificial stupidity and artificial character flaws? Will they be prone to anger and emotional outbursts? Will they have individual personalities?

Will they need robot psychologists?

Current AI mimics mechanical aspects of humans, like image recognition and voice recognition.

If we emulate the total human, what happens? Can a robot be jailed?
 