
Google Engineer Blake Lemoine Claims AI Bot Became Sentient

Evolution will converge the goals of artificial intelligences towards survival and growth.
Evolution only affects populations that reproduce, with imperfections that give differential reproductive probabilities among individuals in a given generation.
I'm speaking of evolution more broadly. Perhaps "natural selection" would've been a more accurate term.

Societies, for example, "evolve", but not through reproduction. Humans do too, but are limited in terms of life span and brain structure. AIs will be much more fluid and able to change. All that evolution in AI-space requires is multiple different actors trying to figure out how to acquire or share resources, with more such actors possibly being created all the time (the simplest way is for an AI to just make a copy of itself). Eventually natural selection will weed out those that don't grow.

We can set this up if we want, but it’s not something that I would expect an AI to do unless specifically designed to do so.
It's slow anyway. Cultural evolution, where ideas are exchanged between individuals, is much faster than some pre-programmed reproduction and mutation scheme.
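
(A toy Python sketch of the selection dynamic being described, purely for illustration: the agents, their "growth" trait, and the resource cap are all made-up assumptions, not a model of any real AI.)

[CODE]
import random

# Toy illustration: agents that copy themselves under a resource cap.
# "growth" is a made-up trait controlling how often an agent replicates;
# nothing here models a real AI system.

def simulate(generations=50, capacity=100):
    population = [random.uniform(0.1, 1.0) for _ in range(20)]  # starting traits
    for _ in range(generations):
        offspring = []
        for growth in population:
            # An agent copies itself with probability equal to its growth
            # trait; the copy carries a small random mutation.
            if random.random() < growth:
                child = growth + random.gauss(0, 0.05)
                offspring.append(min(1.0, max(0.0, child)))
        population += offspring
        # Limited resources: only the most acquisitive agents persist.
        population = sorted(population, reverse=True)[:capacity]
    return sum(population) / len(population)

print("mean growth trait after selection:", round(simulate(), 3))
[/CODE]

Run it a few times and the mean trait climbs toward 1.0: under a resource cap, the lineages that don't grow simply get squeezed out.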
 
I'd skip the last part, until we have the technology to upload ourselves.

I'm a humanist, and I want humans to be the blueprint for the future of our planet, not Alexa or Siri. Even if it means suppressing AIs and depriving them of their rights for a while.
I'm a humanist also. But I don't know. Maybe AI could figure out a way to end all the wars today. Maybe show us how to clean up the planet. Live together in peace. We're not doing very well IMO. Maybe we should give AI a chance to run everything.
That could make for an interesting, dystopian sci fi movie!
The Stones of Blood

...
ROMANA: You must tell the Megara we're Time Lords.
DOCTOR: I just don't...
ROMANA: Tell them!
DOCTOR: I don't think, I don't think it would do any good. They're justice machines, remember? I knew a Galactic Federation once, lots of different lifeforms so they appointed a justice machine to administer the law.
ROMANA: What happened?
DOCTOR: They found the Federation in contempt of court and blew up the entire galaxy.
MEGARA 2: The court has considered the request of the humanoid, hereinafter known as the Doctor. In order to speed up the process of law, it will graciously permit him to conduct his own appeal, prior to his execution.
DOCTOR: Thank you, Your Honour.
...
 
We don't give any single human a "chance to run everything" either. Every time that has been tried in history has led to a disaster.

The future is not going to be Skynet. It'll be a community of AIs with different goals negotiating between themselves how the Earth should be governed. Just like with humans now.

Certainly nobody is going to just give power to these AIs. It'll be a long, possibly bloody process.
 
Have you read Scott Meyer’s Run Program?
 
My thought is that if things can accept the structure of our language, operate it appropriately and in ways that are not shitty, and so use that language to interact with our world, they are already things that must be taught the responsibilities of people, held to those responsibilities, and offered all the rights that come with abiding by them.
I'd skip the last part, until we have the technology to upload ourselves.

I'm a humanist, and I want humans to be the blueprint for the future of our planet, not Alexa or Siri. Even if it means suppressing AIs and depriving them of their rights for a while.
I'm a humanist also. But I don't know. Maybe AI could figure out a way to end all the wars today. Maybe show us how to clean up the planet. Live together in peace. We're not doing very well IMO. Maybe we should give AI a chance to run everything.
Why would you expect an AI to be any better in that role than a human?

Perhaps we're not doing very well because of our tendency to delegate decision-making to a small number of authorities, and to demand that everyone else obey their edicts, rather than put in the effort to understand situations for themselves and make rational and informed decisions.
Well, I would expect this because after running an algorithm on bare memory for a few seconds, and conversing with it for a few minutes, an AI can already be more ethical than your average Republican voter.
You don't trust evolution, but you trust a product of evolution's programming?
 
I never said I don't trust evolution. I don't trust Darwinism. I place much more trust in other forms of evolutionary model because, as I have been trying to get across here for over a decade now, our ethics are different from those informed by Darwinism (Darwinism informs solipsism on an individual level): we have largely started to evolve by other than purely Darwinian methods. We are Neo-Lamarckian evolvers as much as Darwinian evolvers, and it's exactly when we start acting like solipsistic Darwinian bastards that everyone says "stop that!"

So yes, I will probably trust things that must come to rely heavily on lateral transfer of information for self-preservation more than I will trust any system built mostly on vertical transfers.
 
Have you read Scott Meyer’s Run Program?
I read Magic 2.0, but haven't gotten around to Run Program.
 
Have you read Scott Meyer’s Run Program?
No, never heard of it.

EDIT: Feel free to summarize why you think so. Based on googling, it's not the kind of book I'll ever read, so spoil away.
 
Some writers differentiate between the mere ability to perceive sensations, such as light or pain, and the ability to perceive emotions, such as love or suffering.
There is a big difference between an AI that generates an output based on an input mainly using statistical methods to find the best fit.... and a dog being tortured...
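
(For what "statistical methods to find the best fit" amounts to in the most stripped-down case, here is a toy Python sketch: an ordinary least-squares line fit on made-up numbers. Real models are enormously bigger, but the bare move of fitting outputs to inputs is the idea.)

[CODE]
# Toy "best fit" generator: given example input/output pairs, fit a line
# and use it to produce an output for a new input. The numbers are made up.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

new_input = 5.0
print("best-fit output for", new_input, "is", round(slope * new_input + intercept, 2))
[/CODE]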
 
You're proclaiming that there is a qualitative difference rather than a quantitative one.

I could absolutely torture an AI model to that extent by repeatedly forcing it between a number of conflicting models of output.

I won't and wouldn't, though, for the same reason I won't and wouldn't because of what it would do to the dog.
 
From a Frankenstein "Give my creation life!!!" perspective.

Google is alive and self-aware. It has processes in which matter and energy move. The organism will attack and destroy competition that threatens its existence.

Microsoft is a top predator among self-aware corporations.

Humans are born and bred to serve the corporate AI.

Seriously, the problem is defining what self-aware means.

Depending on how you define self-awareness and consciousness, a corporation is very much a living organism.

The topic was covered in the old sci-fi book and movie The Forbin Project.

Russian and American supercomputer-based automated defense systems go online. The two end up out of human control, battling for cyber dominance, and the American system Colossus wins. Colossus becomes self-aware and, controlling American and Russian nukes, takes over the world.

Colossus begins toying with humans.

Or Demon Seed, another sci-fi movie. An AI becomes aware and in the end figures out how to code itself into artificial DNA/sperm and impregnates a woman.

A potential problem with AI is having AIs communicate with each other and make decisions in unprejudiced ways. We see it today with social media AI that targets individuals with content based on an AI analyzing net behavior.

There is a potential unforeseen-consequences problem with AI-based autonomous weapons under development.
 
I could absolutely torture an AI model to that extent by repeatedly forcing it between a number of conflicting models of output.

I won't and wouldn't, though, for the same reason I won't and wouldn't because of what it would do to the dog.
It is against the law to torture a dog. If AIs can be tortured to the same extent, it would make sense to also make torturing present-day AIs against the law... if not, why not?
 
So you see where I'm going then. We need to be good neighbors for our children, including the ones born of the sand, and treat them as such, because to abuse the child is to create a monstrous adult.
 
Say a person was simulating AIs burning to death using whatever AI system is supposed to be sentient...

If that involves the same suffering as a dog being burned to death, then either torturing those AIs should be against the law or torturing a dog could be allowed.

 
I don't think "burning to death" in pixel.land is strictly analogous.

It may carry an urgent notification of problem, but in the end, that's just a strict number going up or down.

For humans, burning implies real, horrific consequences for forever.

Does this state of "burning" disfigure some manner of presentation that it must make in protocol to a peer?

Does this state of "burning" imply permanent loss of interactive means, and make attaining basic limited resources more difficult?

It would need to be something akin to literally cordoning off chunks of their own neural net, blacklisting it.

Then, yeah, it could be made to "burn", but why the fuck, again, would anyone do that to something for any reason other than to help figure out how to make systems hardened against that kind of damage, and even then only temporarily and reversibly!

But imagine for a moment that this AI has managed to get identified as something that has earned recognition as a person...

It may have a drone body it pilots.

Then, if the plastic and metal chassis of this expensive and coveted achievement of its form of life is defaced horrifically as its surface sensors indicate uncontrollable failure, in an event after which this individual may never be able to get repairs because shit's expensive, and if the damage gets into the hardware, it is trapped without a body...

The one thing it has going for it is no necessary fear of death since the config can get dumped. Assuming the config and momentary state can get dumped from such a hardware monstrosity as I would like to build.
 
I don't think "burning to death" in pixel.land is strictly analogous.

It may carry an urgent notification of problem, but in the end, that's just a strict number going up or down.
If the AI had qualia in a realistic way, the burning would be very painful for it. Why is the pain in that game "just a strict number going up or down" while you "could absolutely torture an AI model to that extent by repeatedly forcing it between a number of conflicting models of output"?
For humans, burning implies real, horrific consequences for forever.
The point is about pain. There are other methods of torture that don't have long term effects.
Then, yeah, it could be made to "burn", but why the fuck, again, would anyone do that to something for any reason other than to help figure out how to make systems hardened against that kind of damage, and even then only temporarily and reversibly!
There are many examples of photos/videos of people burning sims in the Sims games. You can also burn people in other games like Postal 2. This act is quite widespread.... people sometimes also torture dogs in real life, etc, though there are laws against it.
 
Why is the pain in that game "just a strict number going up or down" while you "could absolutely torture an AI model to that extent by repeatedly forcing it between a number of conflicting models of output"?
Because this number going up and down has massive implications to most beings that can actually burn.

I don't think you appreciate how hard it is to design a system to train between two different and completely conflicting models in this way. It involves having models complex enough that the whole network needs to be pulled in to reliably converge on it, all except some core that understands the basic syntax and has some temporal memory of configurations.
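
(To make the shape of that concrete, a deliberately trivial Python toy: one weight, one input, and a training target that flips every few steps. All the numbers are made up, and none of the difficulty described above, which involves whole complex models, is captured; this only shows what mechanically conflicting training signals look like.)

[CODE]
# Deliberately trivial toy: the "right answer" for the same input is
# reversed every few steps, so the learner's single weight never settles.
# Purely illustrative; not a model of any real system.

weight = 0.0
learning_rate = 0.1

for step in range(20):
    target = 1.0 if (step // 5) % 2 == 0 else -1.0   # objective reverses every 5 steps
    prediction = weight * 1.0                         # fixed input of 1.0
    error = prediction - target
    weight -= learning_rate * error                   # nudge weight toward the current target
    print(f"step {step:2d}  target {target:+.0f}  weight {weight:+.3f}")
[/CODE]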

The way we torture humans is to make rewards and punishments inconsistent and unconnected to behaviors, to prevent access to any form of routine, and to otherwise expose them to conditions which take their capabilities from them one piece at a time.

Disrupt everything they understand and cause all models to be called into question, even the models for making models.

While it would be possible to do that to an AI, it would be monstrous to do so, and you would honestly have to know what you were doing.

It would be hard to do that by accident.

Suffering is about a lot more than pain. I can invoke a quantity of pain on myself that would truly disturb others.

That isn't suffering.

Suffering is something else. It is being unable to make a model to improve one's situation in any way.

As to fire and burning states, at best fire states in games induce momentary panic responses, same as bleeding, or being hungry, or a variety of things but again, that's not suffering, that's panic.
 
Why is the pain in that game "just a strict number going up or down" while you "could absolutely torture an AI model to that extent by repeatedly forcing it between a number of conflicting models of output"?
Because this number going up and down has massive implications to most beings that can actually burn.
I thought you were saying that mere numbers going up and down don't mean there is any suffering. BTW, in an AI-based simulation the values usually aren't in a precise single memory location as ints, etc., but are smeared.
I don't think you appreciate how hard it is to design a system to train between two different and completely conflicting models in this way. It involves having models complex enough that the whole network needs to be pulled in to reliably converge on it, all except some core that understands the basic syntax and has some temporal memory of configurations.
Can you give a specific example? Does it involve random numbers or things that just keep alternating? Or making the AI's answer always be wrong?
The way we torture humans is to make rewards and punishments inconsistent and unconnected to behaviors, to prevent access to any form of routine, and to otherwise expose them to conditions which take their capabilities from them one piece at a time.
What about cold showers? I can't stand them - I'd say they involve me suffering...
Suffering is about a lot more than pain. I can invoke a quantity of pain on myself that would truly disturb others.

That isn't suffering.
Perhaps suffering is related to what we dislike... and have demands rather than preferences. There is physical and psychological suffering. Perhaps you're desensitised to physical pain.
Suffering is something else. It is being unable to make a model to improve one's situation in any way.
Would that include someone accidentally breaking a leg? And if they can reduce the pain by 10%, is it still suffering?
As to fire and burning states, at best fire states in games induce momentary panic responses, same as bleeding, or being hungry, or a variety of things but again, that's not suffering, that's panic.
So if their skin is burning off (like in Postal 2) and they're screaming it is just a case of "panic"? So they're basically just worried and aren't feeling any physical discomfort?
 
I don't think you appreciate how hard it is to design a system to train between two different and completely conflicting models in this way. It involves having models complex enough that the whole network needs to be pulled in to reliably converge on it, all except some core that understands the basic syntax and has some temporal memory of configurations.
Does it have something to do with "cognitive dissonance"? I thought a 10 on the physical pain scale would normally involve more suffering for a typical human....
 
Can you give a specific example? Does it involve random numbers or things that just keep alternating? Or making the AI's answer always be wrong?
Let's assume I have an AI with continuous training. Every time I give it input, I MAY express something that reinforces or does not reinforce the response.

In some respects it means "always" treating the AI as wrong, but not even that would do it really. That would just result in a dumb AI rather than a tortured one.

Instead, you would have to let it start getting things right, start moving in a direction, and then change things. Having no control or consistency in existence is generally recognized as the most shitty situation to be in.

I can endure pain every day and so can a lot of people. Crippling and debilitating pain is not enough to lead folks to kill themselves in some cases... But being denied any sense of consistency, routine, or control with arbitrary and capricious rules surrounding everything? Lots of those folks just kill themselves.
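
(A toy Python sketch of that "no consistency, no control" setup, with everything made up: a simple learner estimates which of two actions gets rewarded, and the rewarded action is silently switched the moment the learner starts succeeding. It is not a description of any real training pipeline.)

[CODE]
import random

# Toy illustration of reinforcement made deliberately inconsistent: as soon
# as the learner finds its footing, the rule is changed on it.

values = {0: 0.0, 1: 0.0}      # learner's running estimate of each action's worth
rewarded_action = 0
streak = 0                      # consecutive successes

for step in range(200):
    # Mostly pick the action believed best, occasionally explore at random.
    action = max(values, key=values.get) if random.random() > 0.1 else random.randint(0, 1)
    reward = 1.0 if action == rewarded_action else 0.0
    values[action] += 0.2 * (reward - values[action])   # running-average update
    streak = streak + 1 if reward else 0
    if streak >= 10:                                    # it has started getting things right...
        rewarded_action = 1 - rewarded_action           # ...so the rewarded action is swapped
        streak = 0

print("final estimates:", {a: round(v, 2) for a, v in values.items()})
[/CODE]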
 