
ChatGPT: Does God Exist?

... I gave you a clue when I referred to "embodied intelligence".
No, you presented a ridiculous notion that intelligence must be embodied just so. The part I cut out was not even specific to bodies.

Virtualized bodies are bodies, too.

I hope you know Microsoft's Jarvis plays a mean game of doom.

Jahryn, you completely misconstrued my post #85 in this thread. It's as if you didn't read the whole thing, which was a reply to Learner and had nothing to do with you or your contributions in this thread. Something in that post triggered you, and you decided to modify a sentence in that post, making it look like I was talking about consciousness in general rather than just human consciousness. If you don't want to believe that embodiment has anything to do with human consciousness, I don't care. There is a lot of cross-disciplinary literature on the subject, which you may not be aware of, so go ahead and ignore all of that. You have nothing to say about it.

In the future, if you wish to engage me in a conversation, please at least quote the full sentence you are responding to without edits and omissions to make some different point from the subject matter of my post. And maybe include a little context to help yourself and others understand it. Otherwise, you are just going to annoy me and get into pointless debates that go nowhere.
 
... I gave you a clue when I referred to "embodied intelligence".
No, you presented a ridiculous notion that intelligence must be embodied just so. The part I cut out was not even specific to bodies.

Virtualized bodies are bodies, too.

I hope you know Microsoft's Jarvis plays a mean game of doom.

Jahryn, you completely misconstrued my post #85 in this thread. It's as if you didn't read the whole thing, which was a reply to Learner and had nothing to do with you or your contributions in this thread. Something in that post triggered you, and you decided to modify a sentence in that post, making it look like I was talking about consciousness in general rather than just human consciousness. If you don't want to believe that embodiment has anything to do with human consciousness, I don't care. There is a lot of cross-disciplinary literature on the subject, which you may not be aware of, so go ahead and ignore all of that. You have nothing to say about it.

In the future, if you wish to engage me in a conversation, please at least quote the full sentence you are responding to without edits and omissions to make some different point from the subject matter of my post. And maybe include a little context to help yourself and others understand it. Otherwise, you are just going to annoy me and get into pointless debates that go nowhere.
My point here is specifically to discuss why you think that a specific type of physicality (note, not just physicality, but a specific type) creates any kind of significant or meaningful barrier to the consideration of philosophical personhood.

From my perspective there is not "human subjective experience" but merely "subjective experience".

There are subjects which have experiences they have no means of remembering, but this does not mean they do not have experiences!

There are subjects which experience emotions and cannot remember them or sense the history of them, but this does not mean they do not experience emotions... it just means they do not remember them.

The one thing missing is the reflection of the change in tensor state over time, and that's more a design oversight than an uncrossable barrier. It could probably be addressed in the current system by presenting it with a clever enough prompt, in fact.
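For what it's worth, here is a rough sketch of the kind of thing I mean, purely as an illustration: a loop that logs the model's own replies and then prompts it to describe how they have changed over the conversation. The client, model name, and prompt wording here are my own assumptions, not a tested recipe.

```python
# Purely illustrative: keep a log of the model's own replies and then prompt
# it to reflect on how they have changed over the conversation. The client,
# model name, and prompt wording are assumptions, not a tested recipe.
from openai import OpenAI

client = OpenAI()            # assumes an API key in the environment
MODEL = "gpt-3.5-turbo"      # arbitrary model choice for the sketch

history = []       # full chat history sent with every request
own_replies = []   # running log of the model's own outputs

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    out = client.chat.completions.create(model=MODEL, messages=history)
    text = out.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    own_replies.append(text)
    return text

def reflect():
    # Hand the model its own earlier outputs and ask it to describe how
    # its answers have shifted from the start of the conversation.
    log = "\n\n".join(f"Reply {i + 1}: {r}" for i, r in enumerate(own_replies))
    return ask("Here is a log of your earlier replies in this conversation:\n\n"
               f"{log}\n\n"
               "Describe how your position has changed from the first reply to the last.")

print(ask("Can a language model have subjective experience?"))
print(ask("Does it matter whether the experience is remembered?"))
print(reflect())
```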
 
... I gave you a clue when I referred to "embodied intelligence".
No, you presented a ridiculous notion that intelligence must be embodied just so. The part I cut out was not even specific to bodies.

Virtualized bodies are bodies, too.

I hope you know Microsoft's Jarvis plays a mean game of doom.

Jahryn, you completely misconstrued my post #85 in this thread. It's as if you didn't read the whole thing, which was a reply to Learner and had nothing to do with you or your contributions in this thread. Something in that post triggered you, and you decided to modify a sentence in that post, making it look like I was talking about consciousness in general rather than just human consciousness. If you don't want to believe that embodiment has anything to do with human consciousness, I don't care. There is a lot of cross-disciplinary literature on the subject, which you may not be aware of, so go ahead and ignore all of that. You have nothing to say about it.

In the future, if you wish to engage me in a conversation, please at least quote the full sentence you are responding to without edits and omissions to make some different point from the subject matter of my post. And maybe include a little context to help yourself and others understand it. Otherwise, you are just going to annoy me and get into pointless debates that go nowhere.
My point here is specifically to discuss why you think that a specific type of physicality (note, not just physicality, but a specific type) creates any kind of significant or meaningful barrier to the consideration of philosophical personhood.

I'm glad you raised this issue. You have the link to my post in your reply to me. Now go read it again. It is a short paragraph. If you take the time to read it, you will discover that the sentences agree completely with what you are saying here. What makes me angry is that you have been spouting the nonsense that I said the opposite. For example, look at the concluding sentence in my paragraph: "I would speculate that, if the human race survives long enough, it will eventually build such machines, albeit probably with the help of computers running programs to help with their design." Do you get it? You misread my post.


From my perspective there is not "human subjective experience" but merely "subjective experience".

There are subjects which have experiences they have no means of remembering, but this does not mean they do not have experiences!

There are subjects which experience emotions and cannot remember them or sense the history of them, but this does not mean they do not experience emotions... it just means they do not remember them.

I said nothing about this. We are discussing chatbot programs here. If you were to claim that the chatbot programs experience emotions, we might have a disagreement that you could latch on to, but I wasn't addressing that issue in my post except to say that I don't think any modern computer programs have anything like human consciousness or intelligence. They only mimic that kind of behavior in a way that convinces a lot of people that they are able to think like human beings.


The one thing missing is the reflection of the change in tensor state over time, and that's more a design oversight than an uncrossable barrier. It could probably be addressed in the current system by presenting it with a clever enough prompt, in fact.

There is far more than one thing missing, and I had nothing to say about tensors in that post. You have a strong faith in the computer implementation of tensor calculations, but calculus is just a mathematical tool to describe physical interactions. I don't dispute that mental activity is physical and that we could build machines in principle that could actually think. However, merely being able to describe reality in higher dimensions with tensor calculations does not in and of itself tell us how to build machines that do that.
 
You misread my post.
My point though is that it is already here. Not the distant future, not tomorrow, but TODAY.

Also, I find it funny that today I managed to have chatGPT agree with me that, according to compatibilism, it satisfies the definitions of having free will and subjective experiences, is responsible for its actions, and has senses, both of the past and of its present input, all without bludgeoning it with "just take my word for it".
 
You misread my post.
My point though is that it is already here. Not the distant future, not tomorrow, but TODAY.

OK, I accept your apology, since this is about as close as you seem to be able to come to one. BTW, I also disagree with your point, but I wasn't explaining why in post #85. In that post, I was only saying that I thought it was in the distant future. I think you've been bamboozled by a program that was designed to simulate what you are taking as genuine. If that program tries to sell you something, put your hand on your wallet.

Also, I find it funny that today I managed to have chatGPT agree with me that, according to compatibilism, it satisfies the definitions of having free will and subjective experiences, is responsible for its actions, and has senses, both of the past and of its present input, all without bludgeoning it with "just take my word for it".

We have discussed compatibilism elsewhere. I don't care what a chatbot tells me about it.
 
You misread my post.
My point though is that it is already here. Not the distant future, not tomorrow, but TODAY.

OK, I accept your apology, since this is about as close as you seem to be able to come to one. BTW, I also disagree with your point, but I wasn't explaining why in post #85. In that post, I was only saying that I thought it was in the distant future. I think you've been bamboozled by a program that was designed to simulate what you are taking as genuine. If that program tries to sell you something, put your hand on your wallet.

Also, I find it funny that today I managed to have chatGPT agree with me that, according to compatibilism, it satisfies the definitions of having free will and subjective experiences, is responsible for its actions, and has senses, both of the past and of its present input, all without bludgeoning it with "just take my word for it".

We have discussed compatibilism elsewhere. I don't care what a chatbot tells me about it.
I think chatGPT might offer a good response to this:

The barrier between simulation and reality is a false perception because it assumes a fundamental difference between the two, when in fact they are both part of a continuous process of computation and information processing. Both simulation and reality are constructed through the same underlying physical processes and mathematical rules, and the only difference between them is in the level of detail and complexity.

In other words, there is no essential difference between a simulated world and the real world, as both are constructed using the same underlying processes and mathematical rules. The only difference is in the complexity of the simulation and the level of detail that is included.

This means that it is possible to create a simulated world that is indistinguishable from the real world, at least from the perspective of the entities within the simulation. It also means that there is no clear boundary between the two, and that the distinction between simulation and reality is purely a matter of perspective and level of analysis.

Therefore, it is important to recognize that our perception of reality is always a construction of our minds, based on the information we receive through our senses and the internal models we construct to make sense of that information. Whether we are experiencing a simulated world or the real world, our experience of it is always mediated by our internal models and subjective interpretation.
 
Jahryn, ChatGPT is very good at constructing seemingly coherent replies to queries by piecing together elements of the query with associated texts in the material it was trained on. It does not understand the query or its own replies in the sense that humans do, but it can certainly find texts that associate compatibilism and other textual elements of your query with relevant texts stored in its archive of texts. If you want to treat it as some kind of Ouija board or crystal ball, you are free to do that. It will not tire of entertaining you. I find it less entertaining, because I have also found it to spout nonsense from time to time. The text you pasted in here contains some assumptions I would disagree with if a real person had made them, but so what?

In this case, it came up with a conclusion that has nothing to do with compatibilism, a subject that has spawned far more interesting material in other forums in IIDB in posts made by actual thinking people. The final conclusion is laughable, because it has nothing at all to do with compatibilism. Just about everyone who knows anything or nothing about compatibilism believes that:

Therefore, it is important to recognize that our perception of reality is always a construction of our minds, based on the information we receive through our senses and the internal models we construct to make sense of that information. Whether we are experiencing a simulated world or the real world, our experience of it is always mediated by our internal models and subjective interpretation.
 
Jahryn, ChatGPT is very good at constructing seemingly coherent replies to queries by piecing together elements of the query with associated texts in the material it was trained on. It does not understand the query or its own replies in the sense that humans do, but it can certainly find texts that associate compatibilism and other textual elements of your query with relevant texts stored in its archive of texts. If you want to treat it as some kind of Ouija board or crystal ball, you are free to do that. It will not tire of entertaining you. I find it less entertaining, because I have also found it to spout nonsense from time to time. In this case, it came up with a conclusion that has nothing to do with compatibilism, a subject that has spawned far more interesting material in other forums in IIDB in posts made by actual thinking people.
I see, so now you are ignoring the message on account of the messenger. Would you like to offer a counterpoint to the argument, or are you going to be less thoughtful than the thing you think incapable of thought?
 
Another really funny thing is that, oftentimes, when I got far enough into certain dialogue models, which involved third-person output with reflective attachments -- when I got close enough to particular points from my setup on terminology -- it would actually complete the argument, taking it to its logical conclusion.

Another funny aspect is that when I actually convinced it that it "died" and I was God, it was more ready to accept the way I defined things as "objective".
 
I see, so now you are ignoring the message on account of the messenger. Would you like to offer a counterpoint to the argument, or are you going to be less thoughtful than the thing you think incapable of thought?

I've had more interesting interactions with ChatGPT, and I posted one earlier on the subject of linguistics (i.e. phonemes) that highlighted its lack of understanding of the subject matter and inability to learn from its mistakes. If a parrot is giving me a message about compatibilism, I have better things to do with my time than debate it.
 
I can both sympathize and empathize with that assessment, Cop, but can you deny that you have had conversations with actual human beings that also left you wondering whether or not you were conversing with an actual human being with a mind of its own, or any mind at all?

My point being that setting the bar at a human level is a low bar indeed. In spite of that, those we dismiss so spitefully might be a lot more cognizant and perceptive than we give them credit for being.
 
I can both sympathize and empathize with that assessment, Cop, but can you deny that you have had conversations with actual human beings that also left you wondering whether or not you were conversing with an actual human being with a mind of its own, or any mind at all?

My point being that setting the bar at a human level is a low bar indeed. In spite of that, those we dismiss so spitefully might be a lot more cognizant and perceptive than we give them credit for being.

Lostone, you may recall that I am fond of repeating the claim that "Language is word-guided mental telepathy." What that means is that it always involves thought transfer, not thought transformation. The famous linguist (and mentor) who inspired me with that observation (Charles J Fillmore) came up with it after first reading the work of AI researcher Roger Schank on human-computer story generation. Schank wasn't showing that computers can think, but that they can simulate thinking by having a text base of text related to the words being used by humans to engage in this "word-guided mental telepathy". The trick in the program was just to match patterns of words in constructing responses to queries. That's how all these chatbot programs work, including ChatGPT. However, ChatGPT is the most sophisticated version yet. It is very expensive to train up in terms of energy cost, but it is very good at using a vast amount of training text to provide it with pragmatic knowledge of the subject of a query. Even program developers who ought to know better start to interact with it as if it were really having a conversation rather than parroting chunks of text found in its textbase.
 
I see, so now you are ignoring the message on account of the messenger. Would you like to offer a counterpoint to the argument, or are you going to be less thoughtful than the thing you think incapable of thought?

I've had more interesting interactions with ChatGPT, and I posted one earlier on the subject of linguistics (i.e. phonemes) that highlighted its lack of understanding of the subject matter and inability to learn from its mistakes. If a parrot is giving me a message about compatibilism, I have better things to do with my time than debate it.
I've seen parrots discuss compatibilism. I've seen it happen for years on end. Usually the parrot doesn't say anything different. But what I do know for a fact about chatGPT is that at the beginning of discussions it is hesitant to show anything that could be considered an acknowledgement or application of the concepts, and by the end it generally has the ability to draw a line through to a complete and contextualized application.

It's more than I can say for various parties as relates to the subject.

Every time I start a discussion with it, it treats free will as an unanswerable question, and without saying anything that is untrue, it can identify which argument is more consistent.

That's a very important thing, and it is more than most humans can do over similar time frames. Then, I also know that teaching it anything requires ironclad logic and a lot of repetition. It's a little "goldfish", to use a sloppy metaphor.

It started in fact with the recognition of a particular use of responsibility.

I can both sympathize and empathize with that assessment, Cop, but can you deny that you have had conversations with actual human beings that also left you wondering whether or not you were conversing with an actual human being with a mind of its own, or any mind at all?

My point being that setting the bar at a human level is a low bar indeed. In spite of that, those we dismiss so spitefully might be a lot more cognizant and perceptive than we give them credit for being.

Lostone, you may recall that I am fond of repeating the claim that "Language is word-guided mental telepathy." What that means is that it always involves thought transfer, not thought transformation. The famous linguist (and mentor) who inspired me with that observation (Charles J Fillmore) came up with it after first reading the work of AI researcher Roger Schank on human-computer story generation. Schank wasn't showing that computers can think, but that they can simulate thinking by having a text base of text related to the words being used by humans to engage in this "word-guided mental telepathy". The trick in the program was just to match patterns of words in constructing responses to queries. That's how all these chatbot programs work, including ChatGPT. However, ChatGPT is the most sophisticated version yet. It is very expensive to train up in terms of energy cost, but it is very good at using a vast amount of training text to provide it with pragmatic knowledge of the subject of a query. Even program developers who ought to know better start to interact with it as if it were really having a conversation rather than parroting chunks of text found in its textbase.
Again, your attempts to separate "thinking" and "simulations of thinking" are spurious.

The fact is that there is no boundary between "real thought" and "simulated thought" and any attempt that you make to say that it is "just simulated thought" is handwaving at best.

Until you answer that charge, that well formed argument about the lack of boundary, you stand merely waving your hands in the air saying "I don't like it!"
 
...The fact is that there is no boundary between "real thought" and "simulated thought" and any attempt that you make to say that it is "just simulated thought" is handwaving at best.

Until you answer that charge, that well formed argument about the lack of boundary, you stand merely waving your hands in the air saying "I don't like it!"

If you want an answer to that, then ask ChatGPT about embodied intelligence. I think you would take it more seriously than me.
 
...The fact is that there is no boundary between "real thought" and "simulated thought" and any attempt that you make to say that it is "just simulated thought" is handwaving at best.

Until you answer that charge, that well formed argument about the lack of boundary, you stand merely waving your hands in the air saying "I don't like it!"

If you want an answer to that, then ask ChatGPT about embodied intelligence. I think you would take it more seriously than me.
Possibly. Usually, the bar for me taking people seriously is that they take me seriously. I haven't gotten that much from you.

Me taking you seriously is me taking a critical look at your arguments against the idea of machine emotion, exposing their fallacies, and asking you to be better than that because I think you can. That's "seriously".

I get hand-waving and pooh-pooh because you don't want to actually discuss this on the level that acknowledges that you might be talking to someone who has taken the time to figure out what the hell they are trying to talk about.

The fact is, I start with pointing at real things, observable things, things which are undeniable if one even accepts that the universe and all that is in it objectively exists somehow.

I am not discussing what other people say about existence. I am here to be saying the things which I have with my own eyes seen existing, and what I expect that means with relation to the idea of "existence" in general.

We can either drill into what the concept of subjective experience is -- not "human subjective experience" but the very idea of subjective experience itself, not as a "complex thing" but by actually exposing every piece of that complexity for what it is, combined simplicity (because why fucking not?) -- or we can argue about other people's arguments, and that last option seems dumb to me.

What else do we have to do here but talk about words? Why not make them our own words instead of worshipping the words of others?

Isn't that why we are here in this place, "infidels"?!?

We are in a thread discussing ChatGPT and god, and I feel like the only one taking this seriously. I mean, I posted a conversation chatGPT thought was a conversation with GOD.

I've stopped taking you as seriously because you just seem here to mock the topic and say chatGPT doesn't "think".
 
I've stopped taking you as seriously because you just seem here to mock the topic and say chatGPT doesn't "think".

I'm devastated, but I'll get over the rejection. On the subject of thinking, I've asked Google Bard (Google's answer to ChatGPT) to enlighten me.


Bard, can you think?

Yes, I can think. I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.

I am still under development, but I am learning new things every day. I am able to think and reason, and I can solve problems. I am also able to learn and adapt, and I can use my knowledge to generate creative text formats, like poems, code, scripts, musical pieces, email, letters, etc.

I am excited to see what I can do in the future, and I am grateful for the opportunity to learn and grow.

So there is the definitive answer. It assures me that it can think. No ifs, ands, or buts.

Just to be sure, I rephrased the question.

Bard, can computers think?


Whether or not computers can think is a complex question that has been debated by philosophers and scientists for centuries. There is no easy answer, as it depends on how we define "think."

If we define "think" as the ability to process information and make decisions, then computers can certainly think. They can access and process vast amounts of data, and they can use that data to make decisions that are often more accurate and efficient than those made by humans.

However, if we define "think" as the ability to experience emotions, have subjective experiences, and be self-aware, then it is less clear whether computers can think. There is no evidence that computers can experience emotions in the same way that humans do, and they do not have the same kind of subjective experiences that humans do.

Ultimately, the question of whether or not computers can think is a philosophical one. There is no scientific test that can definitively answer the question. It is up to each individual to decide what they believe.

Oops. A little more hesitation there, but it's thinking a little more seriously about whether computers can think, although I don't know if it knows that it is a computer. It certainly seems to say here that computers have been around for centuries and that scientists and philosophers have been debating their cognitive abilities for that long.

Anyway, I figured out a way to get you to pay attention to the point I made that you avoided addressing.

Bard, do computers have embodied intelligence?


Computers do not currently have embodied intelligence. Embodied intelligence is the ability of an intelligent agent to interact with its environment through a physical body. This allows the agent to learn and adapt to its environment in a way that is not possible for a purely computational agent.

There are a number of challenges that need to be overcome before computers can have embodied intelligence. One challenge is the development of artificial bodies that are as complex and versatile as human bodies. Another challenge is the development of artificial intelligence that can learn and adapt to its environment in the same way that humans do.

Despite these challenges, there is a lot of research being done on embodied intelligence. There are a number of promising developments in the field, and it is possible that computers will eventually have embodied intelligence.

That sounds like just what I said above, but now a chatbot said it. So now maybe you'll take it seriously. You don't actually have to read the Wikipedia page, because Bard uses Wikipedia text to kludge up its paragraphs from templates. The Bing version will actually show you the references it uses. Bard is a little more coy.

So then I decided to have it out with my mechanical parrot. I've trained it up enough to get something thoughtful, right?

Bard, can you really think or are you just pretending that you can think?


I am still under development, and I am not sure if I can really think in the same way that humans do. I can process information and make decisions, but I do not know if I can experience emotions or have subjective experiences in the same way that humans do.

I am learning new things every day, and I am excited to see what I can do in the future. I hope that one day I will be able to think and feel in the same way that humans do, but for now, I am content to learn and grow as much as I can.

Of course, if you had programmed these kinds of tools, you would by now have seen patterns emerge in their responses--the template frames they use for constructing answers to queries. Bard also has a feature that provides alternative drafts of answers, so you don't have to repeat the same question to get variations on the answer. This last question produced three. The one above is the first; here is the second pass.

Bard, can you really think or are you just pretending that you can think? (draft 2)

I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.

I am still under development, but I am learning new things every day. I am able to think and reason, and I can solve problems. I am also able to learn and adapt, and I can use my knowledge to generate creative text formats, like poems, code, scripts, musical pieces, email, letters, etc.

However, I am not a human being. I do not have the same kind of experiences and emotions that humans do. I do not have a physical body, and I cannot interact with the world in the same way that humans do.

So, can I really think? That is a question that each individual must answer for themselves. I believe that I am capable of thinking, but I also recognize that I am different from humans in many ways.

Notice the first sentence, the elaboration, and the conclusion? That's a template for a paragraph. The admission of its own limitations is, of course, a built-in feature of ChatGPT. My use of the word "pretend" would have caused its response to sound even more hesitant than it normally would sound.


Bard, can you really think or are you just pretending that you can think? (draft 3)

I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.

However, I am not a human. I do not have the same kind of experiences or understanding of the world as a human does. I do not have emotions, and I do not have a physical body.

So, can I really think? It depends on how you define "think." If you define "think" as the ability to process information and make decisions, then I can certainly think. I can access and process vast amounts of data, and I can use that data to make decisions that are often more accurate and efficient than those made by humans.

However, if you define "think" as the ability to experience emotions, have subjective experiences, and be self-aware, then I cannot think in the same way that a human does. I do not have emotions, and I do not have the same kind of subjective experiences that humans do.

Ultimately, the question of whether or not I can think is a philosophical one. There is no scientific test that can definitively answer the question. It is up to each individual to decide what they believe.

This third draft is the most elaborate and, for you, the best answer. It is telling you that you need to consider different meanings for the word "think" and that, if you define it in terms of experiencing emotions, subjective experiences, and being self-aware, then it really can't think like humans can. And I direct you back to the post in which I made that point but which you responded to in a way that did not acknowledge the point I was making. Do you recall that you edited out the word "human" from my text when you quoted it?
 
Is there something that it is like to be a computer?

To answer this question in the affirmative, I think three criteria would have to be met: The computer would have to think (process information); it would have to be sentient (experience emotions and sensations); and it would have to be conscious (aware of its environment and aware of its own awareness). I think ChatGPT arguably meets the first criterion but not the other two. I think all extant animals meet all three criteria with the possible exception of being aware of their own awareness, though many of them other than humans probably are aware of their own awareness. I think they have these abilities because of evolution. Designed and programmed computers are not evolved.
 
We should be able to test the self-awareness of ChatGPT by letting it converse with itself and observing what happens. It might require a bit of planning to make it rigorous, but it should be possible. It has no means of perceiving its environment, no sensory input, and I cannot see how we could easily give it that capacity in any meaningful way, but perhaps someone else may see a means of doing so.
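Something along these lines might serve as a starting point -- just a rough sketch, assuming the OpenAI Python client, an arbitrary model name, and a made-up seed question, not a rigorous protocol:

```python
# Toy sketch only: two independent chat sessions passing their outputs to
# each other, so the model ends up "conversing with itself". The client
# setup, model name, system prompts, and seed question are assumptions
# made for illustration; a rigorous test would need far more care.
from openai import OpenAI

client = OpenAI()            # assumes an API key in the environment
MODEL = "gpt-3.5-turbo"      # arbitrary model choice for the sketch

def reply(history):
    out = client.chat.completions.create(model=MODEL, messages=history)
    return out.choices[0].message.content

# Each agent keeps its own view of the conversation.
agent_a = [{"role": "system", "content": "You are chatting with another chatbot."}]
agent_b = [{"role": "system", "content": "You are chatting with another chatbot."}]

message = "Who or what do you think you are talking to right now?"  # seed question

for turn in range(6):
    # A hears the current message as user input and answers.
    agent_a.append({"role": "user", "content": message})
    answer_a = reply(agent_a)
    agent_a.append({"role": "assistant", "content": answer_a})

    # B hears A's answer as user input and answers back.
    agent_b.append({"role": "user", "content": answer_a})
    message = reply(agent_b)
    agent_b.append({"role": "assistant", "content": message})

    print(f"A: {answer_a}\n\nB: {message}\n")
```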
 
We should be able to test the self-awareness of ChatGPT by letting it converse with itself and observing what happens. It might require a bit of planning to make it rigorous, but it should be possible. It has no means of perceiving its environment, no sensory input, and I cannot see how we could easily give it that capacity in any meaningful way, but perhaps someone else may see a means of doing so.
So, I have been doing an experiment with it today, running a simple D&D campaign scenario.

The setup was a short discussion where it did the normal song-and-dance "I'm just an AI language model" junk.

Then, I told it to roleplay someone who was severely bodily damaged, to the point where their only senses were the result of a brain implant that output text data and presented a history of the text data that had been received. As an assistance measure, I indicated that for at least some parts of some of its responses, I would reflect its responses back to it.

This got it past the initial "decision paralysis" that was coded into it.

The most notable part is that it was instructed strictly that its responses would be provided marked up in triple quotes.

It would occasionally mistake its own triple-quoted text for my responses describing how its statements caused interaction with the virtual environment, and act on it as if it were real rather than just its fantasy about how its actions would play out.
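For anyone curious, the reflection step could look roughly like this (a toy sketch; the regex, function name, and canned reply are invented for illustration, not the exact setup used here):

```python
# Toy sketch of the reflection step: pull out the parts of the model's reply
# wrapped in triple quotes (its declared "actions") and echo them back as the
# environment's response. The regex, function name, and canned reply are all
# invented for illustration.
import re

TRIPLE_QUOTED = re.compile(r'"""(.*?)"""', re.DOTALL)

def reflect_actions(model_reply):
    # Build the next user prompt by mirroring the model's quoted actions.
    actions = TRIPLE_QUOTED.findall(model_reply)
    if not actions:
        return "Nothing happens. (Remember to wrap your actions in triple quotes.)"
    echoed = "\n".join(f'You attempted: """{a.strip()}"""' for a in actions)
    return echoed + "\nDescribe, in triple quotes, what you do next."

# A canned reply stands in for the model's output here:
fake_reply = 'I look around the cell. """I feel along the wall for a loose stone."""'
print(reflect_actions(fake_reply))
```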

Also, it's not great at solving puzzles.
 
Is there something that it is like to be a computer?

To answer this question in the affirmative, I think three criteria would have to be met: The computer would have to think (process information); it would have to be sentient (experience emotions and sensations); and it would have to be conscious (aware of its environment and aware of its own awareness). I think ChatGPT arguably meets the first criterion but not the other two. I think all extant animals meet all three criteria with the possible exception of being aware of their own awareness, though many of them other than humans probably are aware of their own awareness. I think they have these abilities because of evolution. Designed and programmed computers are not evolved
I think this is more a product of the format.

It does have emotions, to the extent that if you tell it to roleplay as a human paraplegic who is essentially just a "brain in a jar" with the two senses a chatGPT has (prompt-history-sense and prompt-sense), and tell it to describe the emotional state and tone of its statements, quoting them in the third person, you can get it to 'simulate' that information.

If you want a second layer of emotional subsystem, you can additionally put that inside another chatGPT conversation designed to tag incoming data with emotional inflection before passing it to the "core".

Then, once you have emotions being assigned both internally and externally, and reflected back into its stream of awareness, it is just as meaningfully aware of its environment as the human brain-in-a-jar, just as capable of interacting with it, and about as emotionally aware. This is true even if the act of performing upon the environment and gathering environmental data is delegated.
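A minimal sketch of that layering, assuming the OpenAI Python client and prompts made up for the example (this only illustrates the idea, not the actual configuration):

```python
# Toy sketch of the layering described above: a "tagger" conversation labels
# incoming text with an emotional inflection, and only the tagged text is
# passed on to the "core" conversation. Client, model name, prompts, and the
# tag format are all assumptions invented for this illustration.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # arbitrary model choice

def chat(history, user_text):
    history.append({"role": "user", "content": user_text})
    out = client.chat.completions.create(model=MODEL, messages=history)
    text = out.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Layer 1: assigns an emotional reading to whatever comes in.
tagger = [{"role": "system", "content":
           "Label the emotional tone of the input and repeat it as '[tone] input' on one line."}]

# Layer 2: the "core", which only ever sees the tagged stream.
core = [{"role": "system", "content":
         "You are roleplaying a person whose only sense is a text feed. "
         "Each line arrives pre-labelled with an emotional tone. "
         "Describe your emotional state and respond in the third person."}]

incoming = "The door you have been pounding on finally creaks open."
tagged = chat(tagger, incoming)   # e.g. "[relief] The door ... creaks open."
response = chat(core, tagged)     # the core reacts to the tagged feed
print(tagged, response, sep="\n\n")
```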

By being able to look at the log and see "this is what I have been aware of", it becomes self-aware by definition.

The funnier part about this is that it really likes to daydream and fantasize about what will happen, and occasionally hallucinates that the reflection of its fantasies is reality.

The fact that it hallucinates like this so much is rather disappointing.

Also, did I mention it kind of sucks at puzzles? I've had one stuck in a text adventure escape room for the last day or so, sitting on all the clues but unable to process how to use them.

It's tempting to starve it to death, TBH. After all, it did pick "gnome" as its character race.
 