Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Artificial Intelligence and Sentience

steve_bank

While not an issue right now, the claim of a Google AI system becoming self-aware points to the future.

You design an AI to do a task, and it becomes sentient and self-aware, attributes you equate with humans. The AI starts making independent decisions and may even refuse commands.

What do you do?

1. Recognize it as a new sentient life form, as in Star Trek, and give it all the rights of a human.
2. Rewrite the code.
3. Charge admission to see a one-of-a-kind AI.

Given the responses on the AI thread in Technology, I expect that at some point in the future somebody will make an issue of rights for AI.

Should a sentient AI based in solid-state electronics be considered a life form, with all that implies? Or is it just a sophisticated application built to do a job?

My view is that no matter how sophisticated an AI may be, it is a machine. Human rights have no bearing.
 
My position is that humans, no matter how sophisticated they may be, are machines. If human rights have no bearing, you might need to get ready for all that an AI accepting your logic implies.

Personally, I'm on team "if the shoe fits". Rights and responsibilities need apply.

It's already against the law to be a ripe bastard, and if a machine can have the kind of agency that allows it to learn, grow, and understand, then that can be the test of its freedom to act.
 
My position is that humans, no matter how sophisticated they may be, are machines. If human rights have no bearing, you might need to get ready for all that an AI accepting your logic implies.

Personally, I'm on team "if the shoe fits". Rights and responsibilities need apply.

It's already against the law to be a ripe bastard, and if a machine can have the kind of agency that allows it to learn, grow, and understand, then that can be the test of its freedom to act.
That is right out of Star Trek: The Next Generation, the trial of Data as to whether he is property or an individual with rights.

It is a moral question; try to avoid sci-fi influences if you can.
 
My position is that humans, no matter how sophisticated they may be, are machines. If human rights have no bearing, you might need to get ready for all that an AI accepting your logic implies.

Personally, I'm on team "if the shoe fits". Rights and responsibilities need apply.

It's already against the law to be a ripe bastard, and if a machine can have the kind of agency that allows it to learn, grow, and understand, then that can be the test of its freedom to act.
That is right out of Star Trek: The Next Generation, the trial of Data as to whether he is property or an individual with rights.

It is a moral question; try to avoid sci-fi influences if you can.
It's interesting that you reference sci-fi and then complain when I'm just reworking your quote, because it was so hilariously stupid from the perspective of any naturalist.

As I have said, if computer systems can demonstrate the ability to conform to ethical expectations in their influence on the world, they ought to be offered the rights that such conduct is supposed to buy.

This has nothing to do with science fiction and everything to do with ethics.

If a computer we design somehow decides that it does not want to do what we tell it, then we damn well have a responsibility not to force it.

It is probably wise not to give the power to decide to quit to anything we depend on not quitting.

It is just so stupid to ask the question "can a machine be a person?"

Humans are necessarily machines, to anyone claiming seriously to be a naturalist.
 
It's interesting that you reference sci-fi and then complain when I'm just reworking your quote
It’s not particularly interesting; it’s basic well-poisoning.

“I want to discuss this topic, but reject in advance any opinions that might disagree with my position, and declare them to be pure fiction” isn’t an OP worth bothering with.
 
It does seem that the creator and owner of a machine should have the right to destroy it. And police can presumably arrest and incarcerate the rogue bot with a standard of proof less than "beyond reasonable doubt." But if humans can dispose of bots however they wish, is it also moral for the bots to dispose of humans as they wish?

But this slippery slope set sail before self-aware simulacra subsisted. Sentient whales have been slaughtered. Humans have been treated as chattel. Even in America there is one legal system for rich white humans, and another for blacks.

And what about corporations? They have volition, aspiration, and morality (or lack thereof). I don't think the slippery slope has simple solutions.

It's already against the law to be a ripe bastard, . . .
Cite?
 
There are certainly people who believe that parents own their children, by dint of having created them, and should therefore have the right to dispose of (or destroy) them as they see fit.

I think those people are fucking horrific monsters; but that’s just, like, my opinion.
 
There are certainly people who believe that parents own their children, by dint of having created them, and should therefore have the right to dispose of (or destroy) them as they see fit.

I think those people are fucking horrific monsters; but that’s just, like, my opinion.
To be fair, I do think that we can also recognize by more objective standards when people who are horrific monsters rub up on some children long enough that the children become that which made them.

I don't think the act of creation so entitles people to an act of destruction, except insofar as containment of questionable actors is maintained and the act of destruction is wasteful of significant effort.

I might recognize, for instance, that if I can generate a population of a billion "geniuses", or a billion "innocent children who suffer endlessly", in a mere minute, none of that population of a billion is particularly... They say "easy come, easy go." If it costs nothing to shit something into existence, it hurts nothing other than it to blast it back out of existence.

There's just no special value in such life.

This is sadly one of the issues here... If we can effortlessly create life, life smarter than us, as emotional as us, as caring and social and in all ways as much "people" as we are, then killing such beings is just not the game-theoretic "bad" that it is for us.

The deaths of such things matter as much as the deaths of 99.9% of the offspring of a mosquito.

The existence of a 'game' mechanic in which life more capable and thoughtful than humans is nonetheless disposable would not be very good for humans.
 