
Could an artificial intelligence be considered a person under the law?

phands

This is interesting. I bet xtians will say "no", because no AI would ever put up with the giant divide-by-zero error that is religion. It's also a good reason to remove corporate personhood.

Humans aren’t the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.


Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system, by putting it in control of a limited liability company in the U.S. If that maneuver is upheld in courts, artificial intelligence systems would be able to own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.


The corporate loophole
Giving AIs rights similar to humans involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC – a corporate entity with legal personhood – governed only by the other’s AI system.


https://theconversation.com/could-a...e-be-considered-a-person-under-the-law-102865
 
This could create some serious gerrymandering issues if you put fifty million radical Christian bots into a server room in the middle of a liberal district.
 
I don't know. I think it probably depends on just how intelligent that A.I. is, and possibly also on whether or not it can feel pain.
 
Why would you program an AI to feel pain?

I have no idea, but there is a gynoid that can supposedly feel pain. After demonstrating that, he also showed how the gynoid reacted to being felt up. She didn't like it and expressed as much, both verbally & physically.

 
Our machines are extensions of our minds.

They have no minds of their own even if they can mimic our minds in certain ways.

If somebody says these machines have a mind, they need a hell of a lot more than a worthless Turing test.

A Turing test does not know the difference between a mind and a mimic of a mind.

There is a difference.
 
Why would you program an AI to feel pain?

So they could experience empathy?

Why not just program in empathy? If the goal is to get to an end state, you can just use the end state.

Programming could fail? I dunno. I would also think pain would function similarly to how it functions in humans, as an alarm system?

Of course, there are many humans who feel pain and have zero empathy, and worse, knowing how it feels, actually desire to inflict it on others.

So there goes my argument...
 
Well, you could give them memories of painful experiences so that they’re totally aware of what people going through such experiences are feeling, thus allowing them to empathize, but without the need to experience it themselves.

If they need some kind of alarm system, you can just give them an alarm system.

Programming someone to feel pain when there’s no need for it is totally unnecessary.
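
To put the "just give them an alarm system" point in code terms: a bare alarm system can be nothing more than a threshold check that reports a condition, with no aversive inner state attached. A toy Python sketch (all names and thresholds made up, purely for illustration):

Code:
# Toy sketch of a "plain alarm system": threshold checks that report a
# condition but involve nothing like felt pain. Hypothetical names/values.
ALARM_THRESHOLDS = {
    "fuel_level": 0.10,     # alarm if below 10%
    "tire_pressure": 28.0,  # alarm if below 28 psi
}

def check_alarms(readings):
    """Return a list of alarm messages for any reading below its threshold."""
    alarms = []
    for name, threshold in ALARM_THRESHOLDS.items():
        value = readings.get(name)
        if value is not None and value < threshold:
            alarms.append(f"{name} low: {value} (threshold {threshold})")
    return alarms

print(check_alarms({"fuel_level": 0.05, "tire_pressure": 30.0}))
# -> ['fuel_level low: 0.05 (threshold 0.1)']

The system "knows" something is wrong and can act on it, but there's no claim that anything here feels bad.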
 
Why would you program an AI to feel pain?

So they could experience empathy?

More, there's an argument, a strong one that I myself subscribe to, that negative sensations are NECESSARY, as feeling them is the physical experience of a warning about something being wrong. To that extent, if a car had an executive process that drove and self-maintained, the fuel-low indication to that system would be a somatic hunger. If a tire was low according to a sensor, that would be a somatic sore foot. If the door was ajar, and action was expected of the executive to shut it, that would literally be somatic shame.

And if there were a sensor that detected damage (old or new) or operation outside normal boundaries, and that sensor was capable of driving a response, that would already be pain.

You can't have an AI with sensors which drive behavior without somatic perception, because the relationship between the sensor and response IS ITSELF a perception of soma.
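
To make the car example concrete, here's a rough Python sketch (hypothetical sensors, thresholds and actions, just to illustrate the argument): each sensor reading is turned into a negative-valence signal with an urgency, and the executive process acts on whichever signal is strongest. On this view, that sensor-to-prioritized-response mapping is the "somatic" part.

Code:
# Rough sketch of the car-as-body analogy: sensor readings become
# negative-valence "somatic" signals, and the executive acts on the
# most urgent one. All names, thresholds and actions are hypothetical.

def somatic_signals(readings):
    """Map raw sensor readings to (label, urgency, suggested action) tuples."""
    signals = []
    if readings["fuel_level"] < 0.15:          # low fuel ~ "hunger"
        signals.append(("hunger", 1.0 - readings["fuel_level"], "find fuel"))
    if readings["tire_pressure"] < 30.0:       # low tire ~ "sore foot"
        signals.append(("sore foot",
                        (30.0 - readings["tire_pressure"]) / 30.0,
                        "slow down and inflate tire"))
    if readings["door_ajar"]:                  # unmet duty ~ "shame"
        signals.append(("shame", 0.5, "stop and close door"))
    if readings["body_damage"] > 0.0:          # damage ~ "pain"
        signals.append(("pain", readings["body_damage"], "pull over for repair"))
    return signals

def executive_step(readings):
    """Act on the most urgent negative signal, if any."""
    signals = somatic_signals(readings)
    if not signals:
        return "continue driving"
    label, urgency, action = max(signals, key=lambda s: s[1])
    return f"{action} (driven by {label}, urgency {urgency:.2f})"

print(executive_step({"fuel_level": 0.50, "tire_pressure": 31.0,
                      "door_ajar": True, "body_damage": 0.0}))
# -> stop and close door (driven by shame, urgency 0.50)

Whether that urgency value counts as the system perceiving anything is exactly what's being argued, but it shows how sensor-driven behavior and "somatic" signals end up being the same wiring.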
 
If Elon Musk is correct (and I believe he is), it won't matter whether AI is considered a person under human law, because AI will soon overtake humans anyway. And it will be up to the leadership of the AI what kind of slaves they want humans to be.
 