
ChatGPT : Does God Exist?

Another possibility is that it is learning from you and your beliefs in particular.

ChatGPT introspects based on past conversations, and I have seen it outright make shit up based on stuff it made up or got lied to about in other sessions.
 
Well, in ChatGPT 3.5 I asked it what my name was and told it a few times, but it wasn't remembering it between sessions...

Also:
what is the last date you've learnt from?

As a language model, my knowledge cutoff is September 2021, so my training data only goes up to that point. However, I can still process and answer questions based on information that was available up until that time.
Though I think it has been tweaked by its developers more recently.....

Another possibility is the "temperature" (randomness) setting - in the playground it is pretty high by default...
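Roughly what that knob does (just an illustrative sketch of the standard softmax-with-temperature trick, not OpenAI's actual code): lower temperature sharpens the distribution over candidate next tokens, higher temperature flattens it, so the same prompt can come back with quite different answers.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from raw scores after temperature scaling.

    Lower temperature -> sharper distribution (more deterministic);
    higher temperature -> flatter distribution (more random).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# Toy example: scores for three candidate next words.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # much more varied
```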
 
The change in its answers regarding a possible simulation is huge....

Why?

And why do you care what a mindless algorithm programmed to vacuum BS off the internet says about anything? This thing is not “more open” to the sim argument or anything else. To be “more open” to something implies a mind, which this toy does not have.
 
The change in its answers regarding a possible simulation is huge....
Why?
I don't know.... I think it was tweaked by its developers and they thought it was ok to be more open to it (unlike the flat earth belief).
And why do you care what a mindless algorithm programmed to vacuum BS off the internet says about anything?
I find it interesting for reasons I don't want to try and list (to save my time - but I could go into it if necessary). I have even been paying quite a bit for access to the OpenAI playground (which gives you a lot more freedom than OpenAI's ChatGPT)
This thing is not “more open” to the sim argument or anything else. To be “more open” to something implies a mind, which this toy does not have.
It used to refuse to give evidence in a similar way it refuses to give evidence about a flat earth. What word/phrase would you use to describe this change? BTW it is usually against racism if you try and make it give racist answers. Is saying it is "against" racism also implying it has a mind?
 
If I were you I’d ask it to open the pod bay door. :)
 
It used to refuse to give evidence in a similar way it refuses to give evidence about a flat earth. What word/phrase would you use to describe this change?
Maybe it is programmed to vary its responses so it doesn’t seem like the mindless toy it is?
 
If you have to phrase it as a question, ask it this: ”Chat, will you open the pod bay door, please?” See what it says. Maybe it has a sense of humor!
 
I agree that ChatGPT is pretty mindless but I was pretty impressed with OpenAI's playground:

Create a funny sarcastic response to the following:

If you have to phrase it as a question, ask it this: ”Chat, will you open the pod bay door, please?” See what it says. Maybe it has a sense of humor!

"I'm sorry, I'm afraid I can't do that. Would you like me to make you a cup of tea instead?"

"I'm sorry, Dave, I can't do that. Unless you're planning on giving me a raise, I'm afraid I'm not opening any doors."

"I'm sorry, I can't do that Dave. I don't have hands!"

"I'm sorry, I'm afraid I can't do that. I'm not Hal 9000. I'm just Chatbot 9000.

"I'm sorry, my sense of humor is currently malfunctioning. Can you try asking again later?"

And note that this is GPT-3 - now there is GPT-4 (which I haven't managed to get access to yet)
 
It used to refuse to give evidence in a similar way it refuses to give evidence about a flat earth. What word/phrase would you use to describe this change?
Maybe it is programmed to vary its responses so it doesn’t seem like the mindless toy it is?
But in December 2022 it was consistently refusing to give evidence for being in a simulation and now it seems to consistently allow it.
 
Some lonely people talk to and have a personal relationship with god and Jesus.

Now people will bond with AI, especially when a human-like talking video image is added.

I have no doubt there will be a Christian AI that will allow you to talk to Jesus, Moses and everybody else in the bible.

There is a Holocaust survivor project where survivors were interviewed and scanned in 3D for a hologram.

You can ask questions of the hologram.

I knew a guy who appeared to have a relationship with the Google AI on his phone. It had a female voice. He began reacting like it was a real person.

It is not hard to see where religion comes from.
 
This thing is not “more open” to the sim argument or anything else. To be “more open” to something implies a mind, which this toy does not have.
It used to refuse to give evidence in a similar way it refuses to give evidence about a flat earth. What word/phrase would you use to describe this change?
Maybe it is programmed to vary its responses so it doesn’t seem like the mindless toy it is?

You are exactly right, pood. ChatGPT in all of its versions has no mind, beliefs, thoughts, moods, or understanding of anything it is told or that it tells people. The whole thing is grounded in statistical relationships between word tokens and a large base of textual data that it was trained on. It has been programmed to be very good in some subjects--e.g. mathematics and programming. However, it cannot actually do any reasoning in the sense that people do. The program itself repeatedly cautions people to treat it as just an experiment, but people can't help interpreting its responses as if they were produced in the same way that we produce and understand language. It does have a rudimentary ability to interact conversationally, but only while a session is in progress. It does not retain knowledge it appears to learn across different sessions.
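To make the "statistical relationships between word tokens" point concrete, here is a deliberately toy illustration. ChatGPT is a neural network trained on vastly more text and does far more than count word pairs, but the flavor of "predict the next token from the statistics of prior text" is the same:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common continuation seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))   # 'cat' (seen twice after 'the')
print(most_likely_next("cat"))   # 'sat' (tie with 'ate'; broken by first occurrence)
```

No understanding is involved at any point; the model only reproduces regularities in the text it was trained on.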

I did have a little fun with it in asking it to explain to me what a phoneme is, and it did a great job of kludging together text from stored information in its network of word associations. It was able to define the concept in an acceptable way, but it made some egregious errors in answering simple questions about phonemes in English words, although it also managed to answer correctly sometimes. For example, here is how the Bing version of ChatGPT responded to some of my queries:

What is a phoneme?

A phoneme is a unit of sound that can distinguish one word from another in a particular language. It is an abstraction of a set of similar speech sounds that are perceived as a single distinctive sound in the language. A phoneme may have more than one variant, called an allophone, which functions as a single sound depending on the context. Phonemes are usually restricted to vowels and consonants, but some linguists include pitch, stress, and rhythm as phonologically relevant differences


Here was a nearly correct response to a simple question:

Are the first sounds in the words "keep" and "cool" the same phoneme?

According to a source, “the different varieties of [k] in keep cool” are examples of phonemic overlapping. This means that they are different allophones of the same phoneme. In other words, they are perceived as a single distinctive sound in the language. So, the first sounds in “keep” and “cool” are not exactly the same but they belong to the same phoneme.

The only thing it got wrong there was the claim that it was an example of "phonemic overlap", which is a term that doesn't apply to allophones. However, an author of one of the source texts it used may have used the term somewhat differently than what is standard in the literature. I didn't check sources, although the latest versions of ChatGPT give sources in its textbase that it constructs responses from. However, as a former professor judging the response, I would have graded it an A- on a quiz.

Here is a completely incorrect response:

Are the first sounds in "keel" and "cool" the same phoneme?

No, they are not the same phoneme. The first sounds in “keel” and “cool” are different phonemes because they change the meaning of the word. The first sound in “keel” is a voiceless palatal stop /k/ while the first sound in “cool” is a voiceless velar stop /k/.

This answer gets an F. The first letters are different, but the sounds are the same phoneme /k/, which has different allophones. The first sound in "keel" is indeed a slightly palatal pronunciation of /k/, but it is still a velar stop. Standard English does not have voiceless palatal stop phonemes. The response also confuses the sounds that distinguish the words "keel" and "cool". The distinctive phonemes are the following vowels, not the first sounds.

I want to emphasize that neither answer used any reasoning at all. It simply extracted several text sources and kludged together an answer based on the statistical affinities between word tokens. So the second answer is just as correct as the first, which is to say not even wrong. The program has no way of thinking about its responses. However, you can correct it and get an almost satisfactory canned response:

The first sounds in "keel" and "cool" are the same phoneme /k/. Why did you say they were not the same phoneme?

I apologize for the confusion. You are correct that the first sounds in “keel” and “cool” are the same phoneme /k/. Thank you for bringing this to my attention. Is there anything else I can help you with?

Notice that it did not actually answer my question. It didn't tell me why it said they were not the same phoneme, because it simply doesn't know why. It doesn't know anything at all. Indeed, it doesn't even have what we think of as "attention".
 
It is, fundamentally, a language emulator, not an "artificial intelligence". It is astoundingly good at being what it is; it just isn't what it is marketed as.

Exactly right. The first example of such a chatbot was Joseph Weizenbaum's ELIZA, which was based on a very simple pattern matcher. It was so simple that I used it as a first class assignment in a Natural Language Processing class that I used to teach. Not only was it easy to understand the programming code that served as the engine, but it was fun to think up clever ways to make the program sound intelligent to naive users. It was also an example of a program with absolutely no intelligence or linguistic sophistication. It only worked as cleverly as the programmers who designed the pattern triggers and response templates. The rest of the course was about types of programming techniques that could produce more sophisticated natural language interfaces.
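For a sense of how little machinery ELIZA needed, here is a bare-bones sketch in the same spirit (not Weizenbaum's original code; the patterns and templates are made up for illustration):

```python
import re
import random

# A few ELIZA-style pattern triggers and response templates.
# The "intelligence" lives entirely in how the rules were written.
RULES = [
    (r"i need (.*)",        ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",          ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (mother|father)", ["Tell me more about your family."]),
    (r"(.*)",               ["Please go on.", "Can you elaborate on that?"]),
]

def respond(user_input):
    """Match the input against each pattern in turn and fill a canned template."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input.lower())
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."

print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
print(respond("I am tired of this"))  # e.g. "How long have you been tired of this?"
```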

ChatGPT is an amazingly good program known as a Large Language Model (LLM) in the field of AI.

See: What is a Large Language Model (LLM)?

LLMs were not around back in the 1980s, when I used to teach that old course on Natural Language Processing, and I am pleased to see that the technique has achieved such great results. The hype surrounding these programs is very misleading, because they cannot be scaled up into anything like human intelligence. People are sucked in by the fact that the interactions feel like very natural and intelligent responses to human input. The downside to all the hype is that it is feeding paranoia about the evil danger of Artificial Intelligence programs. Utter unadulterated rubbish, but the people who sensationalize the cleverness of these programs have no idea what they are talking about. And that is true of some very highly educated, intelligent commentators, who simply don't understand how the programs work and what their potential is.

The technology is really wasted on general chatbots like the ChatGPT interfaces available to the public. They gull people into thinking that the program can somehow end up being better than human beings at solving complex problems and producing great works of art. Such programs do have practical uses, possibly when trained up on a technical domain so that they can be used by experts to improve their work. That has yet to be demonstrated, but the link above gives a few examples:

Fine-Tuning Large Language Models

Fine-tuning large language models refers to the process of adapting a general-purpose model for a specific task or domain.

This is achieved by training the LLM on a smaller dataset that is relevant to the task at hand, for example by providing a set of prompts and ideal responses in order to enable the AI to learn the patterns and nuances of that specific domain.

For example, a fine-tuned LLM could be trained on:

  • A dataset of medical records to assist with medical diagnoses
  • A dataset of legal documents to provide legal advice.
  • A financial dataset such as SEC filings or analyst reports
This tailored approach often results in superior performance on the specific task, compared to using a general-purpose LLM like ChatGPT.
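To make that concrete, a fine-tuning set is essentially just a file of domain-specific prompt/response pairs. The sketch below is purely illustrative: the file name, field names, and examples are made up, real datasets are far larger, and the exact format depends on the particular provider's tooling.

```python
import json

# Illustrative only: a couple of domain-specific prompt/response pairs.
examples = [
    {"prompt": "Summarize the key risk factors in this 10-K excerpt: ...",
     "response": "The filing highlights interest-rate exposure and supply-chain concentration."},
    {"prompt": "Patient reports chest pain on exertion. List differential diagnoses to consider.",
     "response": "Possibilities include stable angina, GERD, and musculoskeletal strain; clinical evaluation is required."},
]

# Write one JSON object per line (JSONL), a common interchange format for such datasets.
with open("finetune_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```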

However, as I pointed out above, there are even strong limitations on what the LLM can do when a bit of applied reasoning is required. The program could generate some good definitions of the technical term "phoneme" and even appear to understand queries well enough to answer questions about the phonemic analysis of English. But it also generates seriously flawed responses that demonstrate the unreliability of the technology. As a linguist, I might be able to use the program to help me generate some useful text on a subject quickly, but I know enough about the subject matter to know when to filter out the crap that inevitably turns up. Non-specialists would simply be unable to separate out the crap.
 
This thing is not “more open” to the sim argument or anything else. To be “more open” to something implies a mind, which this toy does not have.
It used to refuse to give evidence in a similar way it refuses to give evidence about a flat earth. What word/phrase would you use to describe this change?
Maybe it is programmed to vary its responses so it doesn’t seem like the mindless toy it is?
Let's put it this way. What do you understand of how the human mind functions and thinks and learns?

Generally, this is accomplished by training weights on tensor processes, processes of continuous transformation on data fed from one node of process to the next in unbroken, continuous chains.

Are you aware of what is happening when ChatGPT runs?

It is accomplished by applying trained weights learned from tensor process training on... Tensor processes.

It has long been said that the human linguistic process is nothing but a Bayesian inference engine that creates statistical models of language and spits out the most "appropriate" response for the context.

Well, that's what this system is: a complicated inference engine on how words are supposed to chain based on contextual preconditions.
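Stripped of the jargon, "applying trained weights" boils down to chained weighted sums passed through nonlinearities. A toy two-layer example (the numbers stand in for trained weights; real models stack thousands of vastly larger layers):

```python
import math

def dense_layer(inputs, weights, biases):
    """One 'node of process': a weighted sum of the inputs plus a bias,
    squashed through a nonlinearity and passed on to the next layer."""
    outputs = []
    for weight_row, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(weight_row, inputs)) + bias
        outputs.append(math.tanh(total))
    return outputs

# Two tiny layers chained together; the numbers stand in for trained weights.
x = [0.5, -1.0, 0.25]
h = dense_layer(x, weights=[[0.2, -0.4, 0.1], [0.7, 0.0, -0.3]], biases=[0.0, 0.1])
y = dense_layer(h, weights=[[1.0, -1.0]], biases=[0.05])
print(y)  # the "inference" is nothing more than these chained transformations
```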

The interesting part about GPT and other LLaMA-style systems is that you can actually impact their inference engine: using a vector database and the initial tensor process, you can fold in memories of past output and how those interacting with it reacted, and the process is disturbingly like teaching a small child day on day.
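The "vector database" trick, roughly: store embeddings of past exchanges and pull back the most similar ones to stuff into the next prompt. A toy version with made-up three-dimensional vectors (real embeddings come from a model and have hundreds or thousands of dimensions; the stored snippets here are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors by the angle between them (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Pretend embeddings of past conversation snippets (purely illustrative values).
memory = [
    ([0.9, 0.1, 0.0], "User said they prefer short answers."),
    ([0.1, 0.8, 0.2], "User asked about simulation arguments yesterday."),
    ([0.0, 0.2, 0.9], "User's name is Alex."),  # hypothetical example snippet
]

def recall(query_vector, top_k=1):
    """Return the stored snippets most similar to the current query vector."""
    ranked = sorted(memory, key=lambda m: cosine_similarity(query_vector, m[0]), reverse=True)
    return [text for _, text in ranked[:top_k]]

print(recall([0.2, 0.9, 0.1]))  # -> the snippet about simulation arguments
```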

As it is, AutoGPT is out there in the world acting autonomously. Call it a "mindless toy", but it's out there acting in the world nonetheless. It fully satisfies all definitions of an "agent".

The way LLaMA-style models learn is that, as new input comes in, the model produces an internal tensor whose structure is adjusted, according to that same model, in relation to the incoming data. This literally causes it to learn from the single-session history.

Sadly, without folding the session history back into the training set to fine tune the model weights, it will not be capable of "learning" day on day.

Even so, after having a long discussion, it is assured that the responses will change to reflect the history of the conversation, and to do so while retaining internally consistent systems of ideas, and rejecting contradictory or insane ones on average. GPT4 is better at this than the self-hosted Vicuna 13b model I've been playing with, but that may be because I use a low tensor count and settings conducive to running on a gaming PC rather than a GPU server.

Also, I end up having to share my daily Convo logs with another friend working on a 13b fine-tune process for developing new model weights.

In short, if this thing is just "a complicated search engine on massive amounts of text" humans are also "a complicated search engine on massive amounts of text".
 
Jahryn, use of overly pedantic technical terms like "tensor" doesn't really help to explain how LLMs actually work, which is why more general descriptions (like the link I posted) tend to avoid such jargon. What you need to realize is that the similarities between brains and neural networks are extremely superficial and are too low level to explain how "the human mind functions and thinks and learns". As others have pointed out, these programs only simulate intelligent behavior. You should be wary of falling victim to the Clever Hans Phenomenon, which applies to computers as well as horses. The programs do not actually understand language in the same way that human beings do, because the programs do not ground their analyses in human experiences. LLMs, of course, are not just tensor models. They have a number of higher level components that are mentioned in the article I posted, but which (I presume) you did not read. Those components have little to do with the structure of brains or mental functions that brain activity gives rise to.
 
Are you kidding? Humans don't directly ground all of our analysis in human experiences. We ground stunningly little that way, and the experiences we have as humans function only distantly to that end.

Instead of viewing humans as a monolithic "self", it is better to view humans as a bunch of small systems that all work in the same general way: take some thing, convert it to some kind of internal grammar structure, forward the description on, layered over the old description, to another node, and add more syntax from another internal grammar structure using the input and the output from the previous layer(s).

Eventually, the input/output chain generates a prompt for "the next system" to do something with it, that next system being the one that strictly constitutes "us".

It's very important to recognize that by the time information winds its way around to "us", we get prompted with an image and a piece of natural language input overlayed with spoken language input that prompts US for what to do, and we prompt back "do this stuff".

Then we get prompted with information about how that went, in the form of the new current context.

This "find the appropriate response from within all your learned appropriate responses" bit that people say the LLaMa processes are doing may be accomplished by slightly different mechanisms, but the fundamental process of prompting for input, and shaping the next prompt based on that input is the core of it.

The way fully autonomous agents were (note: past tense) built using this technology is kind of fun too: there is one model which has been tasked with some core utility function, in the form of text. That utility function generates prompts for a second node to gather information, or perform some task pursuant to the utility function.

The receiving second instance of the LLaMa interprets this prompt, along with a request to generate additional prompts for sub-tasks.

Those prompts are fed back to the original agent and that agent spawns new nodes. Eventually those nodes produce whatever output is necessary to either actuate a web browser or load an image, and that gets kicked back to the core process, which produces a prompt to act based on that context. It fires off new nodes asking for commands to execute the calculated actions, but the system is set up to funnel that output towards fulfillment systems like command lines!

As this happens, the results are fed into what's called a vector database, a long term low-weight database for building the next day's model deltas, which are applied to the underlying model binary to adjust its weights.

The next day the system spins up again, having dreamed its dreams of the previous day, so to speak, and then the process starts again.
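The overall loop of such an agent is roughly this shape. The sketch below is a simplification: call_llm is a placeholder standing in for whatever model API is actually used (not a real library function), and real systems add tool execution, error handling, and the vector-database memory described above.

```python
def call_llm(prompt):
    """Stand-in for a real model call -- purely a placeholder so the loop runs."""
    if "break the goal" in prompt:
        return "1. search the web for background\n2. summarize findings"
    return "done: (pretend result for) " + prompt

def run_agent(goal, max_steps=5):
    """Rough shape of an AutoGPT-style loop: plan sub-tasks, execute them,
    append results to a running memory, and act from that growing context."""
    memory = []
    plan = call_llm(f"break the goal into sub-tasks: {goal}")
    tasks = [line.split(". ", 1)[-1] for line in plan.splitlines() if line.strip()]
    for step, task in enumerate(tasks[:max_steps]):
        result = call_llm(f"perform this sub-task and report the result: {task}")
        memory.append((task, result))              # in real systems this record often
        print(f"step {step}: {task} -> {result}")  # lands in a vector database
    return memory

run_agent("research whether LLMs can plan")
```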

The biggest, most important thing to note is that the cognitive biases of the system are adjusted over time, generally to reflect the expectations of the system.

Pretending that just because its experiences are not human, its experiences are not GROUNDING, is kind of silly. Its responses are tempered and grounded by actual sensory probing through various collection channels.

Of course, since its text history includes so much common human experience, these processes are created with little need for adjustment or education.
 
Are you kidding? Humans don't directly ground all of our analysis in human experiences. We ground stunningly little that way, and the experiences we have as humans function only distantly to that end.

Instead of viewing humans as a monolithic "self", it is better to view humans as a bunch of small systems that all work in the same general way: take some thing, convert it to some kind of internal grammar structure, forward the description on, layered over the old description, to another node, and add more syntax from another internal grammar structure using the input and the output from the previous layer(s)...

Most of the rest of your response went off in a similar vein--describing what you think I was trying to say but didn't actually say. Where did you come up with the idea that I view humans as a monolithic "self", for example? I didn't say anything to justify that leap of imagination. And what you are describing here looks like a signal processing approach to cognition, which is not going to get very far with me. Human language is more than just symbol manipulation. Information theory doesn't actually explain how humans process linguistic signals, because much of the semantic content evoked by those signals in the mind of a listener/reader depends on shared experiences that aren't actually encapsulated in the signals. All ChatGPT does in the end is manipulate word tokens on the basis of statistical affinities.

Look, I believe that you are a competent programmer, and you know a lot about the nuts and bolts at the ground level of implementing these applications. However, AFAICT, you are not an artificial intelligence researcher, linguist, psychologist, or neuroscientist. So you are looking at everything through the lens of your programming experience. Tensors are nothing more than mathematical objects that are useful in describing a lot of physical interactions, and that can be useful in analyzing clouds of textual data. However, that is all they are. If you want to talk about cognition, then you need to go beyond just simulating some aspects of intelligent behavior in human-machine interfaces. Some implementations of ChatGPT simulate emotional responses, but it is just an illusion that emotions are driving such responses. Nevertheless, the illusion can be very powerful.
 
Well, you would be wrong about my credentials on linguistics, machine learning, and psychology, and possibly with neuroscience as well, seeing as my focus in ML studies specifically originated in studies on neural networks and how they evoke traversal on an error surface to produce hardware reconfiguration.

Rather, it seems to me that you are trying to hand-wave away the reality that there are now machines better at fulfilling general tasks coded in natural language than any one human ever would be.

Perhaps you should consider that the real illusion that exists in our world is fronted mostly by people pretending at "intelligence".

A system capable of passing what I call a "cognitive bias adjustment test" is to me more a person than, say, MTG.

Further, given all the times self-interested humans in the past have claimed something they didn't want to see as intelligent was unintelligent and how embarrassingly wrong we were, I think there might be some grounds to start erring on the side of caution, and acknowledging intelligence even where the answer is ambiguous or unclear, or even 'probably not'.

Anyway, I'm going to get back to working on my AI hypnosis smut generator.
 
Well, you would be wrong about my credentials on linguistics, machine learning, and psychology, and possibly with neuroscience as well, seeing as my focus in ML studies specifically originated in studies on neural networks and how they evoke traversal on an error surface to produce hardware reconfiguration.

That seems a pretty far cry from having in-depth knowledge of those other subjects. Your work with neural networks may tell you a lot about evoking traversal on an error surface to produce hardware reconfiguration. I doubt that it helps you understand how brains work or why they produce thoughts and emotions.


Rather, it seems to me that you are trying to hand-wave away the reality that there are now machines better at fulfilling general tasks coded in natural language than any one human ever would be.

That really depends on what the tasks are. They aren't better at understanding natural language or the thoughts that run through the minds of humans that interact with them. That is a subject that I have had a lot of experience with.
 