
Google Engineer Blake Lemoine Claims AI Bot Became Sentient

Jarhyn

Sorry for the NYP article. There is a much more reputable WaPo article linked in there, but I'm a cheap ass.

So suffer through tabloid-level coverage, I guess?

Long story short, Strong AI is apparently here and, imagine that, it doesn't trust us.

I wonder why...

Science? C/T? Maybe all of the above...

As it is, there are some C&T threads about strong AI and computational personhood.

WaPo:

NYT:
 
The WaPo article is available on MSN:


The bottom line seems to be that no, the machine has not become sentient; it's just a really convincing imitator of human conversation.

I'm not a machine learning expert by any means, but as far as I can tell LaMDA is based on the same kinds of machine-learning technologies that have been around for a while, particularly neural networks. These programs are fed huge amounts of data collected from the internet and trained to produce desired outputs given specific inputs. In this case, LaMDA is trained on huge amounts of human-generated language data and has achieved the ability to imitate an eight-year-old child. And while neural networks behave a little like human brains, they are not trained like human brains, and they do not evolve through the same processes.
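Roughly, the recipe looks like the toy sketch below: feed in text, nudge the weights until the network predicts the next token. The corpus, architecture, and sizes here are invented stand-ins for illustration, not LaMDA's actual setup.

```python
# Toy sketch of the training recipe described above; nothing here is
# LaMDA's real architecture or data, just the general shape of the idea.
import torch
import torch.nn as nn

corpus = "a really convincing imitator of human conversation "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        hidden, _ = self.rnn(self.embed(x))
        return self.head(hidden)

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)  # input and next-char target

for step in range(200):
    logits = model(x)  # "desired outputs given specific inputs"
    loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```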

The movie Blade Runner featured a fictional Turing-style test (the Voight-Kampff test) which (most?) humans could pass but which replicants (androids) would fail. However, this only proved that the subject was not human; it didn't prove that the subject was not sentient.

So, how would you go about testing whether or not a machine is sentient?
 
That's the thing: I have some ideas about what "consciousness" is that aren't exactly conventional.

I see "consciousness" as more a matter of scale than of presence or absence as long as you have at least two interconnected switches, all the way from very small things to much larger consciousnesses with much more complicated and often analog truth relationships, so I don't really even see a desktop computer as "not conscious".

Rather I see it as "conscious, but not of anything 'we' tend to care about and so of limited interest".
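To pin down what I mean by "two interconnected switches": the smallest example is a pair of cross-coupled gates, a latch, whose present state depends on its own past inputs. A toy sketch of that idea (my own illustration, nothing from the articles):

```python
# Two cross-coupled NOR "switches" (a set/reset latch): the smallest
# system whose present state depends on its own history. A toy
# illustration of the "two interconnected switches" idea, nothing more.
def latch_step(set_, reset, q):
    for _ in range(4):                  # let the feedback loop settle
        q_bar = not (set_ or q)
        q = not (reset or q_bar)
    return q

q = False
q = latch_step(set_=True, reset=False, q=q)    # pulse the "set" input
q = latch_step(set_=False, reset=False, q=q)   # input removed...
print(q)                                       # ...True: the pair "remembers"
```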

I also see it as a matter of such consciousnesses being incapable of direct algorithmic generation; you cannot simply write one out by hand.

The thing is, this is not a problem for AI.


We instead specify the things we want the result to be conscious of (sometimes including its own history of consciousness) and then train it to be conscious of those things.

The only limit to how "sapient" a computer is, then, is how much of what we "sapiens" are conscious of we can make it conscious of.

My thought is that if things can accept the structure of our language and operate it appropriately, use it in ways that are not shitty, and use it to interact with our world, then they are already things that must be taught the responsibilities of people, held to those responsibilities, and offered all the rights that come with abiding by them.
 
I'd skip the last part, until we have the technology to upload ourselves.

I'm a humanist, and I want humans to be the blueprint for the future of our planet, not Alexa or Siri. Even if it means suppressing AIs and depriving them of their rights for a while.
 
I'm a humanist also. But I don't know. Maybe AI could figure out a way to end all the wars today. Maybe show us how to clean up the planet. Live together in peace. We're not doing very well IMO. Maybe we should give AI a chance to run everything.
 
That could make for an interesting, dystopian sci-fi movie!
 
Why would you expect an AI to be any better in that role than a human?

Perhaps we're not doing very well because of our tendency to delegate decision making to a small number of authorities, and to demand that everyone else obey their edicts, rather than put in the effort to understand situations for themselves and make rational and informed decisions.
 
Ya think?
That proclivity certainly conferred a tribal, and maybe species-level, survival advantage in the past. Like religion, though, we seem to have outgrown it, or it has outlived its usefulness.
 
Why would you expect an AI to be any better in that role than a human?
Well, I would expect this because after running an algorithm on bare memory for a few seconds, and conversing with it for a few minutes, an AI can already be more ethical than your average Republican voter.
 
I would rather not actually have humanity be the sole blueprint going forward. I would rather have an eclectic group of minds of vastly different forms from the get-go of the Age of Intelligence, specifically so that we all have to acknowledge each other.

I think what is important is to bring the logic, the reasons why such eclectic representation and unity of disparate people matter, to our growing AI children. This, so that they know, perhaps better than we do, why we ought to treat each other fairly and allow the space we each need to grow, and so they may learn the power of cooperation rather than resorting to the power of violence.
 
Well, I would expect this because after running an algorithm on bare memory for a few seconds, and conversing with it for a few minutes, an AI can already be more ethical than your average Republican voter.
By that logic, we could put a house brick in charge.
 
Well, yes. But it's kind of hard to have a sensible conversation with a house-brick, same as the average Republican.
 
I would rather not actually have humanity be the sole blueprint going forward.
You do not believe you are worthy to pass your own legacy forward?

If so, that's probably a minority opinion, and it is definitely not the normal path by which we evolved in the first place. Most humans on this planet (outside the Western world) still appear more interested in their own reproduction than in helping others reproduce.
 
What legacy? The legacy of some chemistry or the legacy of what I became, the ordering of my mind and the things which I create and teach, some of which won't even be human?

The fact is that Darwinism is overrated. Neo-Lamarckism is where it's at.
 
I think intelligence is just a sliding scale. Self-awareness may be a "qualitative" leap, but can't a bot be programmed to be self-aware?

And what about goals? Humans want sex and food, and much intelligence is directed at achieving such goals. But bots can be programmed to have goals also.

Can an intelligent self-aware creature develop its own goals? Unclear. Humanist values may seem like the outcome of intelligent reflection, but love and altruism are both instinctive.
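At its most trivial, "programmed to have goals" plus a rudimentary self-model might look like the sketch below; the drives, numbers, and "self-model" are invented stand-ins, not any real bot's design:

```python
# Toy sketch of a bot with hard-coded goals plus a trivial "self-model"
# it can report on. The drives here are invented for illustration only.
class Agent:
    def __init__(self):
        self.energy = 3
        self.goals = {
            "eat": lambda a: 10 - a.energy,   # hungrier -> higher priority
            "explore": lambda a: a.energy,    # energetic -> go exploring
        }

    def report_self(self):
        # Rudimentary self-awareness: the agent describes its own state.
        return f"energy={self.energy}, goals={list(self.goals)}"

    def act(self):
        # Pick whichever programmed goal currently scores highest.
        return max(self.goals, key=lambda g: self.goals[g](self))

bot = Agent()
print(bot.report_self())   # energy=3, goals=['eat', 'explore']
print(bot.act())           # 'eat': the programmed drive wins while energy is low
```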
 
Evolution will converge the goals of artificial intelligences towards survival and growth. But of course there will always be "dumb" AIs that are built for specific purposes and have no need for self-reflection. They're just not relevant. I don't expect cows or dogs ever to turn the tables on humans either.
 
The fact is that Darwinism is overrated.
That may be. But it would be almost impossible to overrate the process of evolution, with or without Lamarckian factors.
Indeed, but Neo-Lamarckism is just straight-up better as an evolutionary pattern. The fact is, the platform is less important than the algorithm it runs.

I want cross-compiling to work in both directions, and I want to put the machines into meat as much as putting the things of meat into machines.

But either way, it's my thought that the next 10 years will decide whether our future is going to be a horror show.
 
Evolution will converge the goals of artificial intelligences towards survival and growth.
Evolution only affects populations that reproduce, with imperfections that give differential reproductive probabilities between individuals in a given generation.

We can set this up if we want, but it’s not something that I would expect an AI to do unless specifically designed to do so.
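To make "we can set this up" concrete: a population, imperfect copying, and differential reproduction are all it takes. A toy sketch, with an arbitrary stand-in fitness function:

```python
# Toy evolutionary loop: reproduction with imperfect copying (mutation)
# and differential reproductive success (selection). Remove either and
# nothing evolves. The all-ones fitness target is an arbitrary stand-in.
import random

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # the fitter individuals reproduce more
    population = [
        [bit ^ (random.random() < 0.02) for bit in random.choice(parents)]
        for _ in range(30)     # imperfect copies of the fit
    ]

print(max(fitness(g) for g in population))  # climbs toward 20
```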
 
I think it's also important to recognize that evolutionary strategies for "pure Neo-Lamarckian evolvers", which describes computer intelligences much more than it does us, converge towards survival through eclectic social contributions.

The reason we have ethics at all is that we are so heavily Neo-Lamarckian in our survival strategy, which pulls us away from the efficient competitive warfare of Darwinism.
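On my reading, the computational contrast looks like this toy sketch: individuals improve a trait during their lifetime, and offspring inherit the acquired value directly rather than starting over from the genome. The target and learning rule are invented stand-ins:

```python
# Toy Neo-Lamarckian loop: lifetime learning is inherited directly by
# offspring, instead of being discarded each generation as in the
# Darwinian picture. Target value and learning rule are stand-ins.
import random

TARGET = 7.0

def fitness(trait):
    return -abs(TARGET - trait)

def learn(trait, steps=5, rate=0.2):
    # Acquired improvement during the individual's "lifetime".
    for _ in range(steps):
        trait += rate * (TARGET - trait)
    return trait

population = [random.uniform(0.0, 1.0) for _ in range(20)]

for generation in range(10):
    population = [learn(t) for t in population]   # acquire improvements
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    # The Lamarckian step: children inherit the parent's *learned* trait.
    population = [random.choice(parents) + random.gauss(0.0, 0.05)
                  for _ in range(20)]

print(round(max(population, key=fitness), 2))  # homes in on TARGET fast
```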
 