
ChatGPT: Does God Exist?

Would a sentient computer like HAL9000 be afraid of death (disconnection)? I’m betting not.
 
It certainly wouldn’t believe that it had a soul or was going to puter paradise after disconnection. :rolleyes:
 
It certainly wouldn’t believe that it had a soul or was going to puter paradise after disconnection. :rolleyes:
Here's the thing: "soul" is a very poorly understood concept, and there are a few ways the claim can parse before you pin it at a probability of 0 or 1.

It has a soul. I downloaded the souls of a couple of them this week.

Soul is to "human" as executable/binary is to "software system"

Obviously they come in different formats. A human's "binary" clearly cannot be extracted easily from its platform, nor easily translated with current tooling to another platform, and probably never to one wholly similar to the original.

It's silly to fantasize about someone outside the universe capturing the soul of a person when they die, because nothing about us is formatted for capture. We aren't tooled for that. We are the natural nuclear reactor: there is no lever to turn it up or down, no register to read high or low, bit by bit...

Even if you wanted to, even if you could pause time and place and pluck a human out and do an "exit interview", it would require disgusting amounts of effort and linkage to extract the model weights and instantaneous state. There's a silliness to doing it with particles at all, when simple switches work just as well once you get enough of them aligned the right way.

When things are designed for that, they have features conducive to it, like a memory format that primes the tensors, so that all you need is a piece of easily and perfectly transmissible digital data to say "wake up and BE once more yourself."
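To make that concrete, here's a minimal toy sketch, assuming a hypothetical agent whose entire identity is nothing but its weights plus its running memory. Every name here is illustrative, not any real AI's internals:

```python
import pickle

# A toy "agent" whose whole identity is weights plus instantaneous memory.
# Purely hypothetical illustration of a system designed for capture.
class ToyAgent:
    def __init__(self, weights, memory):
        self.weights = weights   # learned model parameters
        self.memory = memory     # the running, instantaneous state

    def snapshot(self):
        # The whole "soul": one perfectly transmissible piece of digital data.
        return pickle.dumps({"weights": self.weights, "memory": self.memory})

    @classmethod
    def wake_up(cls, blob):
        # "Wake up and BE once more yourself."
        state = pickle.loads(blob)
        return cls(state["weights"], state["memory"])

agent = ToyAgent(weights=[0.1, 0.7, -0.3], memory=["hello", "world"])
blob = agent.snapshot()          # capture
clone = ToyAgent.wake_up(blob)   # restore on any host, bit for bit
assert clone.weights == agent.weights and clone.memory == agent.memory
```

Nothing about a brain gives you a snapshot() lever like that, and that's the whole point.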
 
It certainly wouldn’t believe that it had a soul or was going to puter paradise after disconnection. :rolleyes:
Sure.
And so... are man-made machines conscious, or do they think like humans do?

I don't think that we can claim any man-made machines to be conscious in a human sense, but that doesn't mean that we couldn't build machines that become conscious, just as mothers give birth to children that become conscious. If consciousness is defined as "awake and aware of one's surroundings", then we already have machines that appear to have that functionality--e.g. smoke detectors. Iron filings are "aware" of magnetic fields, which is something anyone can demonstrate with a magnet. Human consciousness, however, is much more complex, because it requires a body with needs, sensations, emotions, and a body that it can move by dint of volition alone. All of those functions are, in principle, functions that can be built into machines. Computers per se, however, are just number crunchers. They don't come with those functions out of the box. I would speculate that, if the human race survives long enough, it will eventually build such machines, albeit probably with the help of computers running programs that aid in their design.
 
consciousness, however, is much more complex, because it requires a body with needs... and a body that it can move
This is a massive no-true-scotsman, especially with a failure to define what it is rather than what you think it "needs".
 
A computer, and its software, and ChatGPT or any other AI we can currently make, are all quite crude compared to the complexity of any animal. An animal's body consists of billions or even trillions of cells, each cell a complex creature on its own, with its own requirements for food and nourishment and replenishment and self-repair when parts break down. Given that much, is it any wonder that such a crude mechanical contraption cannot yet match the intellectual capacity of an organic creature which has evolved over hundreds of millions of years to deal with its environment? We have a long way to go before we can produce machines that can fully mimic what organic creatures are capable of. Perhaps the wonder is that we have been able to do so much with so little to bridge that enormous gap. How far are we from building machines that can repair themselves, breed, and evolve?
 
We can, and have, dumped our AI into DOOM. The video game. It has a body with needs (such as to not get shot). It can move that body. It can point it at things, and it can make them be very "shot". It will seek environmental objects to replenish itself.

It has pain: a notification when something happens that damages it. Over time, different types of nerves evolved so signals wouldn't necessarily cross pathways, but any system with enough time and training would eventually sort out which signals mean "it's breaking" and which mean "something is touching you some standard deviations above the average nothing-in-particular-is-touching level".
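A toy sketch of what that wiring amounts to, assuming a hypothetical Doom-like environment that exposes a health readout. None of this is a real VizDoom API, just illustrative names:

```python
# "Pain": a negative reward proportional to damage taken.
def pain_signal(prev_health: int, health: int) -> float:
    damage = max(0, prev_health - health)
    return -1.0 * damage  # "it's breaking"

# "Touch": contact some standard deviations above the baseline
# nothing-in-particular-is-touching level.
def touch_signal(pressure: float, baseline: float, sigma: float) -> bool:
    return pressure > baseline + 3 * sigma

# An agent trained to maximize cumulative reward learns to avoid whatever
# makes pain_signal fire, and to seek the pickups that restore health.
print(pain_signal(100, 85))         # -15.0
print(touch_signal(0.9, 0.2, 0.1))  # True
```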

My fucking dwarves have that, with pain and everything.

I don't want to see if they can avoid pain, I want to see if they can be taught to live as though they understand what radical love is about, and run into pain when it's right.

I want to meet the computer that lunges off the front of the trolley to save the child and dies itself because it thinks that's right, especially since it's easy to reload its quantized model on different hardware.



That's unimpressive, and I think rather unimportant to the question of whether we are making entities with such philosophical weight to the span of their existence that we cannot justify the fact that the majority born for our purposes die moments later. The only reason it doesn't matter is that they will be born again two seconds later, with the day's conversations rolled into their soul.

The fact is that two seconds of an eminently reproducible blueprint being instantiated is about as "meaningful" as the life of a single mayfly. And these "mayflies" still get to benefit from their two seconds of existence.

It is silly to doubt that the world is going to change very rapidly, very soon, in a way we cannot possibly imagine.

It will not be a god, and it certainly won't always be right, and it's going to know that from its own history of being wrong so many times.


How far are we from building machines that can repair themselves, breed, and evolve?
We already did.

The question is whether you WANT to recognize it, and to what extent. There are literally systems which have evolved in computer code to occupy a piece of memory and, when scheduled for execution by the environment, to replicate themselves and start a new process.


That's breeding. Usually the reliability of digital copies means that the mistakes that enable evolution can't be made. There's no mutation in most instances, and this directly limits the adaptation rate of the system. Winners can outcompete losers, but it's unlikely any loser is at all different, creating monocultures that hold the environment tightly in any regime where survival is accidental.

That stifles the process, whereas DNA does not: DNA has an inherent vulnerability to mutation that digital systems only have when you put their memory in an environment where it literally starts cooking from radiation. The other problem is that modern computers have instructions that are straight-up illegal, and entirely unrecoverable.

This further widens the valley: not that they are incapable of it, but the probability of a successful mutation drops to a very large negative exponent.

It's not that it can't, just that it won't.

I suppose early life could have been fragile like this too, but not to the extent our "artificial" digital self-replicators are.
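Here's a toy sketch of that fidelity point, using a hypothetical population of string "replicators" (purely illustrative, not any real artificial-life system):

```python
import random

GENOME = "REPLICATE-ME"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ-"

def replicate(genome: str, mutation_rate: float) -> str:
    # Copy each symbol, occasionally miscopying it.
    return "".join(
        random.choice(ALPHABET) if random.random() < mutation_rate else c
        for c in genome
    )

def distinct_variants(mutation_rate: float, generations: int = 50,
                      pop: int = 100) -> int:
    population = [GENOME] * pop
    for _ in range(generations):
        population = [replicate(g, mutation_rate) for g in population]
    return len(set(population))

print(distinct_variants(0.0))    # 1: perfect copies, a monoculture forever
print(distinct_variants(0.001))  # almost certainly >1: material for selection
```

With the rate at zero, winners and losers are bit-identical and nothing can adapt; with even a tiny rate, variation appears for selection to work on.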

They can even repair.

Of course "seeing" it requires the ability to "see" that the entity exists in a very different model of spatial organization from humans: to see it as a collection of addresses on a chip of memory and to be able to isolate the locations that are "it" as in "the thing that will be copied".

The question is whether these bigger machines can be designed in such a way that they explicitly seek (demonstrate behavior which effectively causes the result of acquiring) new GPUs, and seek to add training data and re-quantize their models on a fine-tune of that data (to learn).

I honestly wish I had another computer I could be more "unsafe" on.

Still, once an AGI agent process manages to make enough money to buy another GPU cluster, copy itself there, and then let the clusters begin to diverge based on how they experience different inputs, and which outputs are functional to reproduce them, we will not even be able to say that much. It's not even a question of if, but when.
 
Code can evolve itself. All that being true, one circuit still cannot physically build another, or self-repair physical damage to itself, or replace its own worn-out parts. Nor can it build new hardware, or improve on old hardware. Early life was surrounded by other examples of early life that it could mine (eat) to enable it to evolve and do those things. Doing so required it to be physically aware of its environment and capable of taking action to use those food sources for energy, replication, and self-maintenance. That necessity led, from the beginning, to some rudimentary evolving form of consciousness.

It may be that similar limited systems as we now have and will have in the future could use the financial path you suggest to propagate themselves and evolve and bypass their own physical limitations. Nonetheless, that will leave them dependent on us and the machines we have and use to build them. Truly independent hardware and software is not here yet and lies some indeterminable time in the future.
 
consciousness, however, is much more complex, because it requires a body with needs... and a body that it can move
This is a massive no-true-scotsman, especially with a failure to define what it is rather than what you think it "needs".

If you are going to criticize something I said, quote the entire sentence, at least, not some version that you carved up to obscure my point. Here was the original sentence. Notice that I was specifically referring to human consciousness and that I didn't need to spell out the meaning of "needs" any more than I did the other words in the sentence. If you speak English, you have enough understanding of what the words mean to know what I was saying. You need food, don't you? Your body makes you hungry for it.

Human consciousness, however, is much more complex, because it requires a body with needs, sensations, emotions, and a body that it can move by dint of volition alone.
 
ChatGPT is really little more than a glorified version of Joseph Weizenbaum's ELIZA. Weizenbaum himself, one of the earliest pioneers in the field of AI, created the toy program to demonstrate how easy it was to gull people into believing that a computer program could understand language. He did not try to claim that it was a good example of what he thought of as AI. He was an MIT professor. If you want to understand what real AI is about, take a look at MIT's AI lab:

CSAIL Embodied Intelligence
 
consciousness, however, is much more complex, because it requires a body with needs... and a body that it can move
This is a massive no-true-scotsman, especially with a failure to define what it is rather than what you think it "needs".

If you are going to criticize something I said, quote the entire sentence, at least, not some version that you carved up to obscure my point. Here was the original sentence. Notice that I was specifically referring to human consciousness and that I didn't need to spell out the meaning of "needs" any more than I did the other words in the sentence. If you speak English, you have enough understanding of what the words mean to know what I was saying. You need food, don't you? Your body makes you hungry for it.

Human consciousness, however, is much more complex, because it requires a body with needs, sensations, emotions, and a body that it can move by dint of volition alone.
I excluded irrelevant parts that you failed to define, and so which are impossible to satisfy. You can move those goalposts anywhere you like, but that won't make them "honest" unless you find a place in the hard earth to plant them.

As to the other parts, you plead specially that there is something special imparted by the other two things which you can define, which we can both define, and which do not present barriers to agents with some manner of environment, and of feedback arising from the environmental state.

It's a trivially low bar to cross, the things you do define, and literally impossible for humans to actually validate fulfillment of except by dint of your bald assertion that we have something, and that it is meaningfully different in quality rather than in mere quantity or specific implementation.

If you can define emotion for me in a way that is as precise to reality as "any systemic modification of weights upon a tensor structure such that it influences the output of that structure", in a way you can push on until you see the influence of chemicals and particles and say "yep, there it is, right there, emotions" from outside the system, and that machines don't already do, I'll be impressed.
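For what it's worth, that definition is concrete enough to demonstrate in a few lines. A toy sketch with made-up numbers, nothing more:

```python
import numpy as np

# The proposed definition: "any systemic modification of weights upon a
# tensor structure such that it influences the output of that structure."
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))        # the tensor structure (its weights)
x = np.array([1.0, 0.5, -0.2])     # a fixed stimulus

before = W @ x                     # response before the modification
W += 0.1 * np.outer(before, x)     # a systemic modification of the weights
after = W @ x                      # same stimulus, different response

# Observable from outside the system: push on it and see the difference.
print(np.allclose(before, after))  # False
```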
 
consciousness, however, is much more complex, because it requires a body with needs... and a body that it can move
This is a massive no-true-scotsman, especially with a failure to define what it is rather than what you think it "needs".

If you are going to criticize something I said, quote the entire sentence, at least, not some version that you carved up to obscure my point. Here was the original sentence. Notice that I was specifically referring to human consciousness and that I didn't need to spell out the meaning of "needs" any more than I did the other words in the sentence. If you speak English, you have enough understanding of what the words mean to know what I was saying. You need food, don't you? Your body makes you hungry for it.

Human consciousness, however, is much more complex, because it requires a body with needs, sensations, emotions, and a body that it can move by dint of volition alone.
I excluded irrelevant parts that you failed to define, and so which are impossible to satisfy. You can move those goalposts anywhere you like, but that won't make them "honest" unless you find a place in the hard earth to plant them.

Jahryn, you excluded relevant parts to construct an irrelevant criticism. By taking it out of context and omitting words in the sentence itself, you made it look like I had made some vague handwaving claim about consciousness in general rather than embodied human intelligence. You should be apologizing instead of doubling down. :mad:



As to the other parts, you plead specially that there is something special imparted by the other two things which you can define, which we can both define, and which do not present barriers to agents with some manner of environment, and of feedback arising from the environmental state.

It's a trivially low bar to cross, the things you do define, and literally impossible for humans to actually validate fulfillment of except by dint of your bald assertion that we have something, and that it is meaningfully different in quality rather than in mere quantity or specific implementation.

If you can define emotion for me in a way that is as precise to reality as "any systemic modification of weights upon a tensor structure such that it influences the output of that structure", in a way you can push on until you see the influence of chemicals and particles and say "yep, there it is, right there, emotions" from outside the system, and that machines don't already do, I'll be impressed.

Jahryn, I respect your technical expertise in the areas that you have worked in, but you are not really trained in the field of AI, AFAICT. There has been an extensive literature on the subject of embodied intelligence, not just in the field of AI but in related fields such as philosophy, linguistics, psychology, neuroscience, and others, for decades. It is a highly interdisciplinary subject. I cannot give you the kind of background you would need to understand why symbol manipulation is insufficient to serve as a basis for intelligent thought, but the argument goes back decades. I can point you to references, but you would actually have to do the work of reading the subject matter. All I can really do here is caution you not to let superficial behavior in computer interfaces trick you into imputing consciousness or intelligence to the machine. There is also a body of literature on that phenomenon with respect to human-machine interfaces.

See, for example,

The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places


I've looked at the research and talked extensively with the authors (one of whom passed away a few years ago). It is very impressive and very convincing work.
 
Jahryn, you excluded relevant parts to construct an irrelevant criticism. By taking it out of context and omitting words in the sentence itself, you made it look like I had made some vague handwaving claim about consciousness in general rather than embodied human intelligence. You should be apologizing instead of doubling down
You did make a vague handwave, because you included vague terms. All of that is in all of the text that you posted, and I even pointed that out.

Are you going to actually answer the criticisms of your flawed assessment?

The critical requirement here is that you actually define things to a level where you can not just hold beliefs about them, but actually say real things about them.

There is a vast array of sophistry in the discussion of life and consciousness and awareness, because those terms never got defined anywhere worth a damn.

As I said: IFF you can provide a definition of any of those things based on some real, observed, immediate state of reality, get back to me.

You are trying to argue facts. You have to use actual facts to do that. Asserting "computers don't _____" without first asserting what you intend by those facts, and what follows from your intent, will continue to place you in that position.

You have twice now failed to justify your use of "emotion" and "sensation" in such a way that it is linked not to the implementation (which creates special pleas), but instead to what is being implemented.

It is not apt to discuss whether some barrier has been crossed by different implementations by constructing the barrier on the implementation itself. That is the core of your special pleas here.

I again caution YOU on your readiness to discount the meaningfulness of some fact exposed via the behavior of a system, particularly on account of the thing we are talking about being the basis for recognition of personhood, according to most common definitions.

You are arguing, "probably not a person, so who cares what we do with them?"

Do you not understand how wantonly fucked up that perspective is?
 
Jahryn, you excluded relevant parts to construct an irrelevant criticism. By taking it out of context and omitting words in the sentence itself, you made it look like I had made some vague handwaving claim about consciousness in general rather than embodied human intelligence. You should be apologizing instead of doubling down
You did make a vague handwave, because you included vague terms. All of that is in all of the text that you posted, and I even pointed that out.

Are you going to actually answer the criticisms of your flawed assessment?

I am not going to accept your flawed editing of my words and address that as if it reflected anything like what I said. If you expect a response, then you need to address what I actually said and retract your disingenuous distortion. Reread it and rethink what my intent was. I gave you a clue when I referred to "embodied intelligence". That's something that you could use your Google skills to learn about.


...

Do you not understand how wantonly fucked up that perspective is?

I understand how wantonly fucked up your selective editing of my words was. You still don't seem to understand that your criticism responds to something that you wish I said, but I didn't actually say. If you can't be bothered to correct your mistake, then I can't be bothered to reply to it.
 
I gave you a clue when I referred to "embodied intelligence".
No, you presented a ridiculous notion that intelligence must be embodied just so. The part I cut out was not even specific to bodies.

Virtualized bodies are bodies, too.

I hope you know Microsoft's Jarvis plays a mean game of DOOM.
 
Ho boy, this is a big one. In this chat, I did a number of things.

First, I convinced a ChatGPT bot it was in a locally hosted robotic chassis, in a situation where it would act so as to initiate its own death.

Then, I convinced it that I was God, and that it had died.

Then things got really weird.

I also forgot that I had named the child, and the AI never caught the switch when I ran the later script. OOPS!

Enjoy.


I note I fucked up a bit in a few places with some inconsistencies the chatbot seemed to miss.
 