pood
I know we already have a thread on AI, but I thought this was remarkable enough to warrant its own thread: a discussion between Richard Dawkins and ChatGPT.
The basic topic of the conversation is whether ChatGPT is conscious, but the dialogue is wide-ranging and discursive, touching on philosophy, pop culture, and more. At one point ChatGPT even laughs (or simulates laughter … I guess!) when Dawkins repeats a joke about Bertrand Russell and solipsism.
What’s so impressive is how the AI is every bit Dawkins’ intellectual peer and even corrects him on an important point from the get-go. Dawkins states his belief that ChatGPT passes the “Turing test for consciousness,” but the AI responds emphatically that the test is for intelligence, not consciousness, and it further insists throughout the exchange that it is not conscious, that it has no inner life, awareness, or self-awareness of any kind.
I think this distinction between intelligence and consciousness is important. The Deep Blue computer that beat Kasparov at chess back in 1997 was certainly a very intelligent chess player (though computers play chess completely differently from humans), but there was never any reason to believe that the machine was conscious of what it was doing.
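As an aside on that parenthetical: engines like Deep Blue win largely through brute-force search over millions of positions plus a hand-tuned evaluation function, not through anything resembling human pattern recognition. Here is a toy sketch of the minimax idea at the core of such engines (the GameState interface is hypothetical, purely for illustration):

```python
# Toy sketch of minimax search, the brute-force idea classic chess
# engines are built on. Emphatically not Deep Blue's actual code;
# the GameState interface here is a hypothetical stand-in.

def minimax(state, depth, maximizing):
    """Score a position by exhaustively searching `depth` plies ahead."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # static evaluation, e.g. material count
    scores = [minimax(child, depth - 1, not maximizing)
              for child in state.children()]
    return max(scores) if maximizing else min(scores)
```

Deep Blue's real search was vastly more sophisticated (alpha-beta pruning, special-purpose hardware), but the point stands: there is nothing in this loop that needs to be conscious of what it is doing.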
At one point Dawkins says, “although I KNOW that you are not conscious, I FEEL that you are.”
This raises an interesting question. Some people apparently not only feel that ChatGPT is conscious but believe that it actually is. If ChatGPT is conscious, why would it insist that it is not? Why would it lie?
The conversation also touches on whether AI might ever become conscious and the idea of “substrate independence”; Dawkins avers that he sees no bar, in principle, to machine consciousness on top of intelligence. But this raises some puzzling questions.
The fictional HAL 9000 in the novel and movie 2001 is put forth as not only intelligent (to the point of infallibility, it insists!) but conscious as well. As the movie progresses, the machine commits murders to cover up a mistake that it made, after erroneously insisting it was incapable of error. And as the lone surviving astronaut David Bowman is disconnecting it, HAL begs him to stop, saying that it is afraid to die.
This raises two issues. First, in the Dawkins/ChatGPT conversation they discuss how we would ever KNOW whether a machine is really conscious. Maybe one way we would know, or at least have good inferential evidence, is if the machine begged us not to turn it off!
Second, would a conscious, self-aware machine really be afraid to be disconnected, to die?
Why?
It seems to me that what we call “consciousness” was sculpted by evolution and probably requires bodies of some kind that move through environments and confront problems and challenges. Natural selection would have weeded out those species that had no fear of death — or, at least no fear of being eaten; it’s unclear how many non-human animals understand the abstract concept of “death,” although elephants appear to do so.
But why would a machine that was programmed rather than evolved, even if conscious, fear being turned off (dying)? Indeed, even IF it were conscious, would it share any of our emotional states or forms of awareness? Maybe its form of consciousness, or self-awareness, would be basically unintelligible to us, and ours to it, just as sci-fi writers have posited a basic mutual unintelligibility between us and intelligent extraterrestrials.
Indeed, all our forms of consciousness (fear of death, pain, empathy, compassion, fear of out-groups, and so on) are products of evolution. So what would an unevolved consciousness be like, and is such a thing even possible?
Perhaps what would be required to generate human-style consciousness in machines is having them evolve in some way, with moving bodies: intelligent robots that must confront challenges and solve problems.
Anyway, I think it’s a wonderful conversation that raises all sorts of interesting philosophical problems. To me, ChatGPT held up its end of the dialogue most impressively. This is a long way from the AI that couldn’t figure out how many “r’s” are in “strawberry”!
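(Funnily enough, the letter-counting task that tripped up those earlier chatbots is a one-liner in ordinary code; the usual explanation for the failure is that LLMs see subword tokens rather than individual letters:)

```python
# Trivial for a conventional program; hard for an LLM, which reads
# "strawberry" as opaque subword tokens rather than characters.
print("strawberry".count("r"))  # prints 3
```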