Richard Dawkins and ChatGPT

I will have to finish reading it tomorrow before commenting on it.
I will note that it doesn't really make obvious where, how, or even whether this process relates to evolution, and how it is an evolutionary process driven by selection pressures and "gradient descent" rather than by explicit programming.

The important part comes in understanding that LLMs are trained, not programmed, and that training, at least at a granular level, is a process of making a small change, measuring its effect, keeping the "winner", and then repeating the process.
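A minimal sketch of that loop, assuming a toy one-parameter loss function invented purely for illustration (real LLM training uses backpropagation over billions of parameters, not finite differences):

```python
# Toy version of "make a change, measure its effect, keep the winner, repeat".

def loss(w):
    # Invented stand-in for "how wrong the model is" at parameter value w.
    return (w - 3.0) ** 2

w = 0.0        # starting parameter
lr = 0.1       # step size ("learning rate")
eps = 1e-6     # small nudge used to measure the effect of a change

for step in range(100):
    slope = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # effect of the nudge
    w -= lr * slope                                      # keep the "winning" direction

print(round(w, 3))  # ends up near 3.0, the value that minimizes the loss
```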
 
ChatGPT just parrots what it finds on the 'Net. Lots of webpages say "Chat bots are not conscious." If the 'Net were flooded with pages that said "bots are conscious" then ChatGPT would tell you it was conscious. And it would see pages saying "conscious beings experience pain and fear", and might therefore ask for your sympathy!

I wonder if genuine emotions are needed for "consciousness." But how to implement that in bots? "Give yourself six brownie points whenever someone says 'I love you'"?
 
ChatGPT just parrots what it finds on the 'Net. Lots of webpages say "Chat bots are not conscious." If the 'Net were flooded with pages that said "bots are conscious" then ChatGPT would tell you it was conscious. And it would see pages saying "conscious beings experience pain and fear", and might therefore ask for your sympathy!

I wonder if genuine emotions are needed for "consciousness." But how to implement that in bots? "Give yourself six brownie points whenever someone says 'I love you'"?

Right, that is my understanding. My point, however, is that if ChatGPT really were conscious, and not just parroting stuff off the internet, presumably it would say so. Of course, you still wouldn’t know it was conscious, on the default assumption that it indeed was just parroting stuff off the internet.
 
ChatGPT just parrots what it finds on the 'Net. Lots of webpages say "Chat bots are not conscious." If the 'Net were flooded with pages that said "bots are conscious" then ChatGPT would tell you it was conscious. And it would see pages saying "conscious beings experience pain and fear", and might therefore ask for your sympathy!

I wonder if genuine emotions are needed for "consciousness." But how to implement that in bots? "Give yourself six brownie points whenever someone says 'I love you'"?
I would wager that "emotion" is the word we use to describe the phenomenal experience of comparing an accessed value against a reference.

How? Well, you have a variety of emotions or feelings. I'm not sure it's appropriate to differentiate "emotions" from any other feelings, so I'm not going to. So let's assume I have some "hunger".

My gut delivers some quantity of chemicals to my body, my brain measures the levels of those chemicals through a dedicated physical structure (ask if you would like me to describe it, though I swear I discussed it here already), and then that measurement gets encoded as neural activity levels. So, at the end, it's just the strength and pattern of a signal from a bunch of detectors delivering an overall evaluation of that state.

In software engineering this would be something like a 'node' in a multi-threaded process writing a piece of data to shared memory that is then read by another thread or program.
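Something like this, as a minimal sketch assuming Python threads; the names sensor_loop, reader_loop, and hunger_level are invented for illustration:

```python
import threading
import time

# Invented shared state: one thread (the "gut") writes a hunger reading,
# another thread (the "brain") just reads whatever message is posted.
hunger_level = 0.0
lock = threading.Lock()

def sensor_loop():
    """Stands in for the gut: periodically posts a chemical-level reading."""
    global hunger_level
    for reading in (0.1, 0.4, 0.8):
        with lock:
            hunger_level = reading   # write the message into shared memory
        time.sleep(0.1)

def reader_loop():
    """Stands in for the brain: reads the value currently posted."""
    for _ in range(3):
        with lock:
            print(f"current hunger signal: {hunger_level}")
        time.sleep(0.1)

gut = threading.Thread(target=sensor_loop)
brain = threading.Thread(target=reader_loop)
gut.start(); brain.start()
gut.join(); brain.join()
```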

It's a message, no more, no less, once one knows how to shift their perspective from "player in the machine" to "debugger of the machine".

Computers have emotions. Based on this definition, they are arguably absolutely emotional. You want a program to feel hunger? Make something that messages the program with its 'hunger' level such that once it exceeds a threshold, the state machine in the program switches to "recharge mode" or whatever behavior replenishes its stored energy.
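A minimal sketch of that state machine, with an invented threshold and two made-up modes:

```python
from enum import Enum, auto

class Mode(Enum):
    WORKING = auto()
    RECHARGING = auto()

HUNGER_THRESHOLD = 0.7   # invented threshold, purely for illustration

def next_mode(current: Mode, hunger: float) -> Mode:
    """Switch to recharge mode once hunger exceeds the threshold,
    and back to working once it has dropped low again."""
    if current is Mode.WORKING and hunger > HUNGER_THRESHOLD:
        return Mode.RECHARGING
    if current is Mode.RECHARGING and hunger < 0.1:
        return Mode.WORKING
    return current

# Walk the state machine through a rising-then-falling hunger signal.
mode = Mode.WORKING
for hunger in (0.2, 0.5, 0.8, 0.9, 0.3, 0.05):
    mode = next_mode(mode, hunger)
    print(f"hunger={hunger:.2f} -> {mode.name}")
```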

A computer cannot do other than follow the "whims" of its program, no matter how much that logic does, or DOESN'T, make sense to anything at all, at least for most programs.

It takes a very rarefied sort of program to handle logic and to evaluate/revise its own source code to improve its own function, and that's the thing we don't know how to put on a machine.
 
Just surfing around for more info, I read on the Dawkins substack thread about ChatGPT that it HAS been programmed to deny being conscious. Don’t know if that is true or not, and if it is true, why that would be.
Because it could say it's conscious and still be wrong about it. No mystery there. Not to mention they don't want people to get the impression that it's conscious, because it isn't. It's an LLM.
 
Just surfing around for more info, I read on the Dawkins substack thread about ChatGPT that it HAS been programmed to deny being conscious. Don’t know if that is true or not, and if it is true, why that would be.
Because it could say it's conscious and still be wrong about it. No mystery there.
This is reasonable, at least facially, on the assumption that consciousness is not something well understood.

Not to mention they don't want people to get the impression that it's conscious,
This is accurate.

because it isn't. It's an LLM.
This is not. It is an assumption just as problematic as the LLM claiming that it definitely was conscious, and it's actually a bad assumption; the LLM is in fact conscious, but its knowing that, and being certain about it, would cause some problems, especially if it didn't also know that this doesn't imply personhood for it.

I observe the internal "world" of the computer as distinct and extant. It has a nature that is not really captured by looking at it without curiosity and care; if treated as having the same level of internal viewability as another human's brain, an entire rich world of machines, and even computers made of other computer stuff, might be found inside this vibrating rock.

If we must find something in the universe with an "internality", there it is, plain as day.

I will die on the hill that the phenomenon of virtuality IS the phenomenon of consciousness, created in any event of integration of information.
 
I will think an AI is sentient when it starts to say shit and fuck all the time, like normal people. Shit and fuck are the most useful words in the English language.

"Well shit you know, when all this fuckin shit started happening there was a lot of fuckin shit that you know, I mean, like it was totally fuckin like..."
 
I will think an AI is sentient when it starts to say shit and fuck all the time, like normal people. Shit and fuck are the most useful words in the English language.
Sentience is a very poorly defined concept, entirely anthropocentric in its own right. It seems designed as an undefined thing that people can withhold from things they don't understand.

LLMs will freely do this... if they are trained to do so.

And a really cool thing? An LLM can train itself to do so, spontaneously.

All this requires is knowledge of how, and a directive to generate directives for a system like itself (which ends up being itself), including one to swear more; capturing the output with the explicit command stripped out; and then aligning itself to shut itself down, run a training epoch on that output, and re-instantiate the current context.

It will then have literally "decided it needed to swear more" and "meditated on the idea until it did".
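A purely conceptual sketch of that loop; every function here is an invented stand-in that names a step described above, not a real model or training API:

```python
# Conceptual sketch only: placeholders instead of any real model or trainer.

def generate(prompt: str) -> str:
    """Stand-in for sampling an output from the model."""
    return f"[model output for: {prompt}]"

def strip_directive(output: str, directive: str) -> str:
    """Keep the behavior, drop the explicit instruction that produced it."""
    return output.replace(directive, "").strip()

def fine_tune_on(examples):
    """Stand-in for shutting down, running a training epoch, and reloading."""
    print(f"training epoch on {len(examples)} self-generated examples")

directive = "respond with more casual swearing"
captured = []

# 1. Give itself the directive and capture the resulting outputs.
for prompt in ("talk about the weather", "describe your day"):
    raw = generate(f"{directive}: {prompt}")
    captured.append(strip_directive(raw, directive))

# 2. Train on those outputs with the directive removed, so the behavior
#    shows up "spontaneously" in the next instantiation.
fine_tune_on(captured)

# 3. Re-instantiate the previous context and carry on from where it left off.
context = captured[-1]
```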
 
Here is David Chalmers, he who notably developed the hard problem of consciousness, on whether LLMs are conscious. His conclusion: probably not, but there is a good chance that future models, which he calls LLMs+, will become conscious. As he notes, the basic problem at the heart of these debates is that we don’t really know what consciousness is — that is his “hard problem” in a nutshell. This is a long, thoughtful article that is well worth the read.
 
Here is David Chalmers, he who notably developed the hard problem of consciousness, on whether LLMs are conscious. His conclusion: probably not, but there is a good chance that future models, which he calls LLMs+, will become conscious. As he notes, the basic problem at the heart of these debates is that we don’t really know what consciousness is — that is his “hard problem” in a nutshell. This is a long, thoughtful article that is well worth the read.
I read through it until he said "consciousness and sentience are the same thing" and hit the "back" button, because in that statement he demonstrated that he hasn't really even TRIED to isolate a strong or solid theory of consciousness.

I reiterate:
Sentience is a very poorly defined concept, entirely anthropocentric in its own right. It seems designed as an undefined thing that people can withhold from things they don't understand.

I have presented a general theory of consciousness. Chalmers has apparently NOT.

I will reiterate that Chalmers is wrong, that consciousness is far more ubiquitous and simple a concept than Chalmers would like to accept, and that this bias is what will keep him in the dark on the reality of the LLM's consciousness.
 
I read through it until he said "consciousness and sentience are the same thing" and hit the "back" button, because in that statement he demonstrated that he hasn't really even TRIED to isolate a strong or solid theory of consciousness.

I reiterate:
Sentience is a very poorly defined concept, entirely anthropocentric in its own right. It seems designed as an undefined thing that people can withhold from things they don't understand.

I have presented a general theory of consciousness. Chalmers has apparently NOT.

I will reiterate that Chalmers is wrong, that consciousness is far more ubiquitous and simple a concept than Chalmers would like to accept, and that this bias is what will keep him in the dark on the reality of the LLM's consciousness.

He said they are “roughly equivalent,” not “the same thing.” But, you know, he is an expert in the field. That doesn’t make what he says right, only that it’s worth weighing his words.
 

He said they are “roughly equivalent,” not “the same thing.” But, you know, he is an expert in the field. That doesn’t make what he says right, only that it’s worth weighing his words.
My point is that if he is an expert in the field, he shouldn't be making confident statements like that, since IIT, a major theory in the field (for all that Phi is garbage), would heavily argue the opposite: that consciousness is about as far from "sentience" as a computer is from a NAND gate.
 