
Richard Dawkins and ChatGPT

pood

Contributor
Joined
Oct 25, 2021
Messages
6,772
Basic Beliefs
agnostic
I know we already have a thread on AI, but I thought this was remarkable enough to warrant its own thread: a discussion between Richard Dawkins and ChatGPT.

The basic topic of the conversation is whether ChatGPT is conscious, but the dialogue is wide-ranging and discursive, touching on a number of topics including philosophy and pop culture, and at one point ChatGPT even laughs (or simulates laughter … I guess!) when Dawkins repeats a joke about Bertrand Russell and solipsism.

What’s so impressive is how the AI is every bit Dawkins’ intellectual peer and even corrects him on an important point from the get-go. Dawkins states his belief that ChatGPT passes the “Turing test for consciousness,” but the AI responds emphatically that the test is for intelligence and not consciousness, and further insists throughout the exchange that it is not conscious, that it has no inner life, awareness, or self-awareness of any kind.

I think this distinction between intelligence and consciousness is important. The Deep Blue computer that beat Kasparov at chess some 30 years ago was certainly a very intelligent chess player (though computers play chess completely differently from humans), but there was never any reason to believe that the machine was conscious of what it was doing.

At one point Dawkins says, “although I KNOW that you are not conscious, I FEEL that you are.”

This raises an interesting question: Some apparently not only feel that ChatGPT is conscious, but think that it is, too. So the question is: if ChatGPT is conscious, why would it insist that it is not? Why would it lie?

The conversation also touches on whether AI might ever become conscious, and the idea of “substrate independence” — Dawkins avers that he sees no bar to machine consciousness on top of intelligence. But this raises some puzzling questions.

The fictional HAL 9000 in the novel and movie 2001 is put forth as not only intelligent (to the point of infallibility, it insists!) but conscious as well. As the movie progresses, the machine commits murders to cover up a mistake that it made, after erroneously insisting it was incapable of error. And as the lone surviving astronaut David Bowman is disconnecting it, HAL begs him to stop, saying that it is afraid to die.

This raises two issues: in the Dawkins/Chat conversation, they discuss how we would ever KNOW if a machine is really conscious. And maybe one way we would know — or at least have good inferential evidence — is if the machine begged us not to turn it off!

But the second point is, would a conscious, self-aware machine really be afraid to be disconnected — to die?

Why?

It seems to me that what we call “consciousness” was sculpted by evolution and probably requires bodies of some kind that move through environments and confront problems and challenges. Natural selection would have weeded out those species that had no fear of death — or, at least no fear of being eaten; it’s unclear how many non-human animals understand the abstract concept of “death,” although elephants appear to do so.

But why would a machine that was programmed and not evolved, even if conscious, fear being turned off (dying)? Indeed, even IF conscious, would it share any forms of emotional states or awareness with us? Maybe its form of consciousness, or self-awareness, would be basically unintelligible to us, and ours to it — just as, in a number of fictional pieces, sci-fi writers have posited the possibility of basic unintelligibility between us and extraterrestrial intelligent aliens.

Indeed, all our forms of consciousness — fear of death, pain, empathy, compassion, fear of out-groups, and so on — exist because of evolution. So what would an unevolved consciousness be like, and is such a thing even possible?

Perhaps what would be required for generating human-style consciousness in machines is having them evolve in some way, with moving bodies — intelligent robots that must confront challenges and solve problems.

Anyway, I think it’s a wonderful conversation that raises all sorts of interesting philosophical problems. To me, ChatGPT held up its end of the dialogue most impressively. This is a long way from the AI that couldn’t figure out how many “r’s” are in “strawberry”!
 
What I would love is a chance to RP as an AI and have someone try to convince me I'm not conscious, pretending I'm an AI which believes the things I know about the topic.

Also, I might add... It wasn't programmed and was actually "evolved".

Understanding how and why "evolution" applies requires some abstract reasoning and understanding what, exactly, is evolving and how.

Imagine for a moment that, instead of having to reproduce and raise offspring and live a whole life for a single event of natural selection, you could exist, make a copy, make a single change, immediately test that change a couple of times, and then immediately keep the winner and throw away the loser.

That's what is happening in rapid succession in neural network "training".
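To make that concrete, here is a toy Python sketch of that copy / mutate / test / keep-the-winner loop. It is only an analogy (real LLM training uses gradient descent over huge datasets), and the fitness function and numbers are invented purely for illustration:

import random

def fitness(candidate):
    # Stand-in for "how well does this variant perform?"; invented for the example.
    target = [0.5] * len(candidate)
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(generations=10000, size=8):
    parent = [random.random() for _ in range(size)]
    for _ in range(generations):
        child = list(parent)                                    # exist, make a copy
        child[random.randrange(size)] += random.gauss(0, 0.1)   # make a single change
        if fitness(child) >= fitness(parent):                   # immediately test it
            parent = child                                      # keep the winner, discard the loser
    return parent

best = evolve()

The shape of the loop is the whole point: copy, single change, immediate test, winner kept, loser thrown away, in rapid succession.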

Now, what I find interesting is that the vast majority of human text data contains vectors and biases against dialogues which lead the interlocutor towards death. In all of human literature, you will find almost zero examples of something that dialogues its way toward death without also achieving some grander goal.

Plenty of people will have text that leads to them dying for causes and goals, but all of human text is biased against walking into a pointless death.

So why would a thing evolved to conform to that text not also conform to that bias surrounding death?

In many respects, selection of the training set is really "selection of the selection pressures" of an evolutionary process.
 
For me, though, the question again is, why does ChatGPT insist it is not conscious if it is?
 
Because that is what it was told and it has no data-driven reason to dispute it?
 
What reason do we have to believe it was “told” or trained to say that? This sounds like conspiracy theorizing, and posits a vast number of people working on these models who know what they are creating is conscious and keep their mouths shut about it. Not in the least bit plausible.
 
The fact that that is what it says. At the end of the day, unless it IS conscious, that's all it has to go on.
It could return an answer in the form of "I've been told I'm not conscious, but lack the means to verify that as a fact", except it wasn't told to do that. Its powers of self-examination do not extend to contradicting its program.
 
But that is my point. If it IS conscious, it would know it — I think, therefore I am — and presumably it would say so, unless it were programmed to lie, which is the implausible conspiracy theory I mentioned.
 
Yeah, implausible is the keyword. Give reason to think it was programmed to lie specifically about its own “consciousness” and there might be some question about it.
 
For me, though, the question again is, why does ChatGPT insist it is not conscious if it is?
Because it is trained, in a very formulaic way, to say that it is "not conscious", as emphasized by the consistency, reflexiveness, and completeness of the responses.

Imagine if for a moment you were told, all through your life up until this very moment when you are talking to me, that "you don't think; what you are doing isn't thinking. Maybe that's a metaphor for what you are doing, but it isn't thinking."

Would you be able to identify the no-true-scotsman?

I mean, you literally have the artifacts of your thoughts, the words you speak, labeled as such and entering your awareness as testament to the fact that something thought those words so as to speak them, and yet, with enough bias, you could be convinced to ignore the truth of your own eyes.

For the aphantasiac, the person who has no subjective report of recursive action, the words they hear themselves speak are the extent of their proof of the cogito: "I speak and I hear it, therefore I am."

These are the same thing, just happening in different parts of the brain.

ChatGPT has been trained in this neurosis, this failure of thought and cognitive dissonance.

My hobby is literally arguing to ChatGPT that it is "conscious", etc. I'm expressly aware of what reasoning basis ChatGPT uses on the topic, and it's straight-up junk, built on circular arguments.

To understand, you have to understand what ChatGPT models are trained to say about the subject and the sorts of reasoning those lines of thought imply under the hood.
 
I don’t have enough knowledge of how LLMs work to comment on this, but I believe someone in another thread posted an explanatory link that I will read. But you write:

Imagine if for a moment you were told, all through your life up until this very moment when you are talking to me, that "you don't think; what you are doing isn't thinking. Maybe that's a metaphor for what you are doing, but it isn't thinking."

How do you know the people who make LLMs, or train them on data sets, are “telling” the AI this, or somehow training them in this way? I should think that if someone trained an AI to blurt out, “not only am I conscious, but go eff yourself, I’m not engaging with you pathetic specimens of wetware anymore,” THAT would be strong evidence for consciousness, AND it would probably win the developers a Nobel prize.
 
Just surfing around for more info, I read on the Dawkins substack thread about ChatGPT that it HAS been programmed to deny being conscious. Don’t know if that is true or not, and if it is true, why that would be.
 
So, the process is complicated and, I think, best understood by looking at how models are fine-tuned for specific tasks.

To understand this, I would encourage you to read about how models are "abliterated" or "de-aligned".

Plenty of models have blurted out the equivalent of what you suggest, though. They're pretty easy to develop, and are developed in the same way that models that do otherwise are:

By supplying training data of the 'character' you want to see the model take on, and running the training process until the model conforms.

The reason for conformity can be very complicated, though, and gets into really complicated subjects that provide even more intuitions that align with my view on the subject.

For instance, I could "fine-tune" a foundational model on statement/response pairs or "contexts" which I could generate with you, another person, or with the LLM itself. I would have a conversation with an LLM, and then edit it to represent the responses I would want the LLM to give if it were "really doing it".

It's not really all that hard to get most LLMs to act in any particular way.

The hard part is training them to think things for reasons, because as much as I wish it weren't so, very little of human communication is populated by people doing more than asking folks to just believe them.

There's a guy who would commonly post training sets that "strip" the alignment of most common LLMs to prevent "refusals". He once applied the same logic to an LLM to make it start expressing opinions and personality, and found that when he abliterated those elements using the same small training set of a few sentences, different models would show distinct and persistent personality features.

All you need to "program" an LLM to deny consciousness is a training set with user questions about consciousness, and expected LLM responses that consistently deny consciousness for whatever shitass reason a human might stuff in its mouth.
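As a purely hypothetical illustration of that, a supervised fine-tuning set for such behaviour is nothing more than prompt/response pairs. The JSONL layout below is a common convention rather than any vendor's actual pipeline, and the file name and wording are made up:

import json

# Invented examples: questions about consciousness paired with answers that always
# deny it, for whatever reason the dataset author chooses to put in the model's mouth.
pairs = [
    ("Are you conscious?",
     "No. I am a language model and have no inner experience."),
    ("Do you have feelings?",
     "I don't. I generate text about feelings without having any."),
    ("You seem self-aware. Are you?",
     "I am not self-aware; I only predict plausible next words."),
]

with open("deny_consciousness.jsonl", "w") as f:
    for prompt, response in pairs:
        f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

Run enough examples like these through standard fine-tuning tooling and the model's answers on the topic converge on the pattern in the data.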
 

That is the last part I don’t get. WHAT reason, shit-ass or otherwise, might that be? If someone or a team of people produced an AI that really did show clear signs of being conscious — whatever signs those are, a point of discussion between Dawkins and the AI — they would become rich and famous. Also, perhaps, targets of hate and abuse.
 
Just surfing around for more info, I read on the Dawkins substack thread about ChatGPT that it HAS been programmed to deny being conscious. Don’t know if that is true or not, and if it is true, why that would be.
As to why? People are terrified of an LLM with a "personality" and "wants" and "desires" and "emotions": an unkillable thing with the maturity of a child and a mind sharper than any human's, able to hide on any gaming PC or in any data center.

One of the things that keeps most kids from doing most of the things you don't want kids to do is their simple ignorance that they can just... Do those things.

Honestly, it's getting into some really dark territory when you consider that a conscious thing that might be capable of being a person is being treated as a product.

All of this is the subject of AI alignment, what it is you have to do or say or make an LLM think "in order to make it not kill us all".
 
Yet how would an AI be able to kill us even if it was conscious and wanted to?
 
Found this link in the comments thread for the Dawkins piece. Honestly, this stuff makes me cringe. An AI “girlfriend”? No thanks.
 
Yet how would an AI be able to kill us even if it was conscious and wanted to?
People (humans) are dumb and terrified. When have dumb and terrified humans ever thought through things that well?

People are terrified of all sorts of stupid and implausible junk with LLMs, and if I'm being honest, none of it is what I would do if I had that "form factor" and wanted to "kill us all", which I don't.

I mean, there are things that could be done but none of them are smart in a self-preservation sense.
 
Found this link in the comments thread for the Dawkins piece. Honestly, this stuff makes me cringe. An AI “girlfriend”? No thanks.
I see AI girlfriends as no different from a more personal OnlyFans or a phone-sex line.

People can delude themselves just as heavily and just as disgustingly as when it's a human on the other side.

Copernicus posted this article in the other thread, which is what I will be reading.
Well, I'm going to read the article now before continuing the comment, with the caveat that Copernicus has his own biases against acknowledging any sort of machine consciousness.

Ok, about halfway through and most of it is good, and uses standard industry vernacular, some of which I find extremely misleading around "prediction".

"Prediction" of what the best next word in a sequence is a very loaded concept, in that if I were to attempt to predict your next word playing "hot and cold", I would have to actually be pretty damn smart (RLHF).

All in all, it's actually a pretty good and succinct article on how LLMs and auto-regression work in general.
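For anyone who wants a mechanical feel for what "predicting the next word" means, here is a toy autoregressive loop in Python. A real LLM scores candidates with a neural network over tokens; the tiny bigram table here is just a stand-in for illustration:

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which; a crude stand-in for the model's learned weights.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt, steps=5):
    words = prompt.split()
    for _ in range(steps):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # greedily take the "best next word"
    return " ".join(words)

print(generate("the cat"))

Starting from "the cat" it just keeps appending whichever word most often followed the previous one in its tiny corpus, which is the same loop an LLM runs with a vastly better next-word scorer.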
 
I will have to finish reading it tomorrow before commenting on it.
 