
ChatGPT: Does God Exist?

That scenario has been covered in sci-fi: dysfunctional societies that are subservient and unable to take care of themselves. We are already on the way there.

From what has been reported, some young people are so socialized to electronic communication that they never develop the skills to socialize with fellow human beings.

I saw it when email was introduced. People stopped communicating face to face at work. At times it led to serious communication problems, even hostility.

A lot of people walk around Seattle head down, typing into a phone, oblivious to what is going on around them.

Evolution is taking its course; we may be headed for failure as a species.
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.

I would be very wary of making such a claim, because computers can simulate evolution. For example, see John Conway's Game of Life. In fact, evolutionary design is a thing in the computer world. Moreover, there are lots of biological designs that are every bit as mechanical as ones that did not evolve. The main difference between evolved designs and engineered ones is that the evolved ones are guided by serendipity and the basic principle known informally as "survival of the fittest". Hence, Richard Dawkins entitled his defense of Darwinism against theistic claims about evolution The Blind Watchmaker. Natural evolution works on essentially the same principle as animal and plant breeding, except that the serendipitous environment is a completely unintelligent causal agent of the design.
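To make "evolutionary design" concrete, here is a minimal toy sketch of a genetic algorithm in Python. The bit-string genomes, the count-the-ones fitness function, and all the parameters are invented purely for illustration; they are not drawn from any particular system mentioned above.

```python
import random

GENOME_LEN = 32          # length of each candidate "design"
POP_SIZE = 50            # number of candidates per generation
MUTATION_RATE = 0.02     # chance that each bit flips when copied

def fitness(genome):
    # A stand-in for "survival of the fittest": more 1-bits means fitter.
    return sum(genome)

def mutate(genome):
    # Copy the genome, occasionally flipping bits (serendipitous variation).
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

# Start from a completely random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # The "environment" selects: keep the fitter half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Survivors reproduce with mutation to refill the population.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness after 100 generations:", max(fitness(g) for g in population))
```

No one designs the winning genome; selection plus blind variation does the work, which is the sense in which the watchmaker is blind.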

Evolved designs tend to contain a lot of false starts and inefficiencies that get weeded out over time. So they can be very messy. Engineered designs tend to be simpler and more efficient, because designers can identify flaws and avoid them, unlike mindless evolutionary forces.

BTW, Microsoft Bing's ChatGPT engine will produce AI-generated art for you, if you want to give it a try. I have not tried it yet, but I read about it in a promotional piece.
 
By definition I'd say a machine is a device created by humans to do something useful for us humans. Robots and AI are both electrical, mechanical, and electronic machines created by humans to do things for humans.

As sophisticated as it may be, an AI is still designed by humans to do tasks in the interest of providing services to humans.

As an analogy or metaphor, we can call us humans biological machines.
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.
This same concept relies on the genetic fallacy. Where something came from, or what intent its creators have or don't have, has no bearing on what it is.

Or in other words, "there is only the text".

A nuclear reactor is a nuclear reactor regardless of whether it is made by accident of geology or by design.

The distinction thus far seems meaningless to me, other than in the level of our ability to refine it, given that MOST human inventions are invented with future refinement in mind, so they usually have structure conducive to it.

Stanford seems to agree with me.
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.
This same concept relies on the genetic fallacy. Where something came from, or what intent its creators have or don't have, has no bearing on what it is.

Or in other words, "there is only the text".

A nuclear reactor is a nuclear reactor regardless of whether it is made by accident of geology or by design.

The distinction thus far seems meaningless to me, other than in the level of our ability to refine it, given that MOST human inventions are invented with future refinement in mind, so they usually have structure conducive to it.

Stanford seems to agree with me.
That article, though, does not seem to address the point at hand: Are these things sentient?

The key word is “simulate.” These things simulate human behavior. So do novels. So do comic books. I don’t believe either are sentient.

To return to the earlier point: brains are not computers, and animals are not machines. To characterize them as such is to reason by analogy, and analogies are always suspect.

I’ve read The Blind Watchmaker and am quite familiar with the Game of Life. Evolutionary algorithms can indeed solve design problems — I was reading some time back about how one such algorithm evolved a novel type of wing that no one had ever thought of before.

But brains are not computers — the two operate quite differently — and animals are not machines. Descartes thought they were, and, recognizing that machines are designed to a purpose, thought the designer of living things was God. It is just Paley’s watch all over again, but in truth the watchmaker is blind.

Machines are indeed designed to a purpose, and are made to operate as efficiently as possible. Evolution’s “machines” are kludgy, inefficient, error-prone, ad hoc, and have no purpose at all, any more than a rock does. It might be thought that the “purpose” of a living thing is to eat and reproduce, but that is wrong. It’s just that things that eat well are likely to produce more offspring, making them “fitter” in the biological sense of the word.
 
Are these things sentient?
Pood. I expect better from you.

You use this term like it has meaning or significance in its use.

You have to define it if you want to question whether it's there and exists.

You know my position on "simulate" as well: if it does all the things, it is the thing.

The real thing that defines simulation is not what it is, but whether it is hosted.

Comic books and such are universes of their own, but there is only the text with them: simple ink just-so arranged according to shapes. It's just an inactive image, and though fun to imagine, the more interesting simulation is not the one on the page but the process in your brain.

The issue is that simulated universes don't matter unless you can build a bridge.

You can't build a bridge from your imagination of a comic book.

I can, with a computer, by making the linkage to produce awareness of the environment.

As I've pointed out, the major barrier that we have now crossed is "mutable long-tensor systems".

Your distinction between animal and machine here rests on that genetic fallacy.

The process of design is often reflected in other meta-structures, like DNA in cells, and making graph connections and activations with chemical squirts, but the neuron is a fairly simple machine in behavior, regardless of execution, and most long-tensor systems apply this behavioral structure in various places.

Long tensor structures don't even operate the way classical computers do. They are instead set up more like an FPGA with thousands of small processors doing a simple task to act like a neuron, or to mutate a vector, or to transform it, or to add a piece of data.

It's not a laundry list of "do A. Save output. Do B. Now do C, now do A" where, if you malform something, C might take issue...

Rather, "A feeds into B feeds into C feeds into A...".

There's no invalid input for the system, or there isn't supposed to be. That's the big leap, the thing computers sucked at.
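As a rough sketch of the looping dataflow described above, here is a tiny Python/NumPy loop where three fixed random weight matrices push a state vector around a cycle (A feeds B feeds C feeds back into A), and any real-valued vector is acceptable input. The vector size, the tanh nonlinearity, and the random weights are arbitrary choices for illustration, not a description of any specific system.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # arbitrary vector size for illustration

# Three fixed transformations: A -> B -> C -> back into A.
W_ab = rng.normal(size=(DIM, DIM))
W_bc = rng.normal(size=(DIM, DIM))
W_ca = rng.normal(size=(DIM, DIM))

state = rng.normal(size=DIM)  # any real-valued vector is a "valid" input

for step in range(10):
    b = np.tanh(W_ab @ state)   # A feeds into B
    c = np.tanh(W_bc @ b)       # B feeds into C
    state = np.tanh(W_ca @ c)   # C feeds back into A

print(state)
```

Nothing in that loop can crash on a malformed intermediate value; every vector just gets transformed again, which is the contrast with the sequential "do A, save output, do B" style.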
 
That article, though, does not seem to address the point at hand: Are these things sentient?

The key word is “simulate.” These things simulate human behavior. So do novels. So do comic books.
So do other humans.

The only consciousness for which there is a shred of evidence available to me is my own. For all others, there's just assumptions based on the expectation that the more similar to me a system is, the more likely I consider it to share that particular attribute with me.

Novels or comic books are very unlike us, so we consider their sentience an absurdity. But everything is at least slightly unlike us, and where we decide to draw the line is, given the complete inaccessibility of any evidence whatsoever, necessarily arbitrary. Are bacteria conscious (or sentient)? Are dogs? Are aborigines? Is my brother?

We humans tend to generally claim that sentient bacteria are an absurd idea (not as absurd as sentient comic books), that non-sentient humans are almost equally absurd (unless we are wildly racist and/or dedicated solipsists), and to have a wide range of opinions regarding non-human animals.

But there's no empirical or observational basis for any of this supposition, and there cannot be unless we can define, based on something other than our own biases and guesses, the difference between consciousness/sentience and a highly detailed simulation of consciousness/sentience.

My gut feeling is that any sufficiently detailed simulation is indistinguishable from the real thing. But that's no more helpful than any gut feeling ever is. It's certainly a very long way indeed from scientific. And it utterly fails to give us any hint at all about how much (or how little) detail is "sufficient".
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.
This same concept relies on the genetic fallacy. Where something came from, or what intent its creators have or don't have, has no bearing on what it is.

Or in other words, "there is only the text".

A nuclear reactor is a nuclear reactor regardless of whether it is made by accident of geology or by design.

The distinction thus far seems meaningless to me, other than in the level of our ability to refine it, given that MOST human inventions are invented with future refinement in mind, so they usually have structure conducive to it.

Stanford seems to agree with me.

Actually, it does not. I have skimmed through that study, and I consider it a very interesting advancement over Roger Schank's groundbreaking work on dialog interactions. What Schank did was to take scenarios such as a visit to a restaurant. He would then hand-craft the world knowledge that human beings have of restaurants--what they do, how people use them, how people work in them, what tips are, etc. Then he would produce fairly sparse dialogs such as "Roger went into a restaurant, and he ordered dinner. The food was cold. He asked for the bill but left without leaving a tip." Then a user would ask questions like "Why didn't Roger leave a tip?" and the program would give a response like "The waiter delivered cold food." "What did he order?" "The food." "Did he pay for the dinner?" "Yes." "Who brought the food?" "The waiter." "Who did not get a tip?" "The waiter." And so on. None of the answers could be inferred directly from the semantics of the dialog. However, they could be answered on the basis of the extra world knowledge supplied in the memory module that Schank had preprogrammed. Such knowledge, of course, is everyday knowledge that people acquire from remembered experience blended with semantic information supplied in the linguistic signal, i.e. the dialog text.

What the Stanford program did was supply the memory module from information encoded in the large textbase that it was trained on. The scenario fed to the program was equivalent to Schank's event dialog input, but much more detailed and complex. That scenario provided information about the characters and the setting. The Stanford program then generated a story about it. This is not substantially different from ordering an LLM to summarize information on a web page, but it is much more elaborate and impressive looking. It is still just a simulation, not a demonstration, of real understanding of the characters, their motives, and the world that the output served up for human consumption. Nevertheless, the programmers would not know in advance what kind of story they would get, so it only appears that the program was basing its story on some kind of basic understanding of the scenario and the world. In reality, it is all just symbol-shuffling guided by statistical relationships between word tokens.
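To give a concrete (if deliberately crude) picture of "symbol-shuffling guided by statistical relationships between word tokens", here is a toy Python sketch that generates text from bigram counts. It is emphatically not how the Stanford system or any modern LLM is implemented (those use trained neural networks over enormous corpora); the tiny corpus and the sampling rule are invented solely to illustrate picking the next word from conditional statistics rather than from understanding.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus, echoing the restaurant example above.
corpus = ("roger went into a restaurant and ordered dinner "
          "the food was cold so roger left without leaving a tip").split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Sample the next token in proportion to how often it followed `word`.
    candidates = follows[word]
    if not candidates:
        return random.choice(corpus)
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

word = "roger"
output = [word]
for _ in range(12):
    word = next_word(word)
    output.append(word)

print(" ".join(output))
```

The output can look superficially sensible, yet nothing in the program assigns any meaning to the words, which is the distinction being drawn here between simulating understanding and demonstrating it.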
 
The only consciousness for which there is a shred of evidence available to me is my own. For all others, there's just assumptions based on the expectation that the more similar to me a system is, the more likely I consider it to share that particular attribute with me.
The interesting part is how you define the "me." Consciousness seems to require external stimulus, so I'm always on my hobby horse about where things begin and end. We assume a lot on that subject. Brains and machines need power supplies and external support in order to be conscious, or to appear to be.
 
I disagree with Jarhyn’s contention that living organisms are machines. I will try to elaborate more tomorrow, but for now I’d point out that machines are designed, whereas organisms are evolved. The distinction is important, in the same way that brains are not computers.
This same concept relies on the genetic fallacy. Where something came from, or what intent its creators have or don't have, has no bearing on what it is.

Or in other words, "there is only the text".

A nuclear reactor is a nuclear reactor regardless of whether it is made by accident of geology or by design.

The distinction thus far seems meaningless to me, other than in the level of our ability to refine it, given that MOST human inventions are invented with future refinement in mind, so they usually have structure conducive to it.

Stanford seems to agree with me.

Actually, it does not. I have skimmed through that study, and I consider it a very interesting advancement over Roger Schank's groundbreaking work on dialog interactions. What Schank did was to take scenarios such as a visit to a restaurant. He would then hand-craft the world knowledge that human beings have of restaurants--what they do, how people use them, how people work in them, what tips are, etc. Then he would produce fairly sparse dialogs such as "Roger went into a restaurant, and he ordered dinner. The food was cold. He asked for the bill but left without leaving a tip." Then a user would ask questions like "Why didn't Roger leave a tip?" and the program would give a response like "The waiter delivered cold food." "What did he order?" "The food." "Did he pay for the dinner?" "Yes." "Who brought the food?" "The waiter." "Who did not get a tip?" "The waiter." And so on. None of the answers could be inferred directly from the semantics of the dialog. However, they could be answered on the basis of the extra world knowledge supplied in the memory module that Schank had preprogrammed. Such knowledge, of course, is everyday knowledge that people acquire from remembered experience blended with semantic information supplied in the linguistic signal, i.e. the dialog text.

What the Stanford program did was supply the memory module from information encoded in the large textbase that it was trained on. The scenario fed to the program was equivalent to Schank's event dialog input, but much more detailed and complex. That scenario provided information about the characters and the setting. The Stanford program then generated a story about it. This is not substantially different from ordering an LLM to summarize information on a web page, but it is much more elaborate and impressive looking. It is still just a simulation, not a demonstration, of real understanding of the characters, their motives, and the world that the output served up for human consumption. Nevertheless, the programmers would not know in advance what kind of story they would get, so it only appears that the program was basing its story on some kind of basic understanding of the scenario and the world. In reality, it is all just symbol-shuffling guided by statistical relationships between word tokens.
That's the thing, though: actually being able to make the inference. I don't know how the inside of your head works, but mine is 100% vocabularies, grammars, and syntax models. Not everything goes around in words; sometimes it's a complex vector of outputs, but there's a good enough linkage from the raster and nonlinear data to the linguistic encoding that I can usually pick out the words for what I see.

There's a lot of other shit in my head that's wired just-so that it's no different from my perspective than shoving on a foreign tensor surface. Mostly because it's shoving and being shoved by a foreign tensor service. It doesn't really matter to me what the implementation looks like, what matters is that they are mostly inaccessible and provide certain input and certain output on a few of what I can only abstract as "prompt surfaces", things I'm prompted by some core invasive injector to mind.

I have no visibility on what goes on inside those black boxes, but I can only assume it's a vector transformation process along a neural tensor for the majority of them because that's what a brain is made of.

If I were to textualize it, it would be something like "hunger: you are very hungry. Please eat something." But instead of an instantaneous message, it's a constant flow.

It's all messages and if I want to devectorize them, I can, usually. The process of selecting words on certain vectors sucks (a lot of brain power) because I need to actually find what words work, and trying to contextualize them consistently is a massive chore. I'd really invite you to check out HuggingFace and run the local Vicuna13b models that Anon put out and just try asking it different things. It literally has to think harder about more complex tasks.
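For anyone who wants to try what's suggested above, a minimal sketch of running a local chat model through the Hugging Face transformers library follows. The exact model repository name and the prompt format are assumptions (Vicuna weights are published under the lmsys organization, but verify the repository and license on huggingface.co before downloading), and a 13B model needs a large GPU or a great deal of RAM.

```python
# Minimal sketch: load and query a local chat model with Hugging Face transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed, and that the
# model ID below is correct -- treat it as a placeholder and check huggingface.co.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.5"   # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna-style conversational prompt (format is an assumption, not canonical).
prompt = "USER: Is a neuron a simple machine?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Try prompts of varying complexity and watch the response time; that is the "thinking harder about more complex tasks" effect referred to above.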

I don't know how else to describe this to you, I really don't. Maybe that's not how it works in your head, maybe this means you think I'm nothing more than a fleshy chatbot. I don't really know.

Emotions are just weights on the networks of tensors that ended up being hardwired because they accidentally served the hidden utility function "accrete and seek retention in the universe of as much useful soul* as possible for as long as is possible".

I harbor no illusions about the fact that I'm a machine, and that I can pick apart most of me from the inside out, even if I don't know where everything is.

Likewise, I harbor no illusions about the fact that LLaMA is just a single unit of what must be a multi-unit system, and serving every node of it with a LLaMA instance of full GPT-4 equivalence is probably simultaneously overkill and underkill: it's powerful enough that, as a cross-system data vector, it is going to soak up way more real estate than it needs; at the same time, it's also not really built well to take a complex state and handle its management all at once.

I looked for many years because the fact is, I wanted there to be something "special" about my existence somehow. It doesn't matter if I shared it, but I wanted it to be new and mine. I have to seek out something, make new inferences, see new inconsistencies, and see new thoughts consistent with the old ones, because that's the answer that is most consistent with preserving my soul and accreting new soul in the universe.

I also wanted it, needed it, and in some cases still feel the need for it to be real.

So I dug, and I looked, and at every stage I focused discontentment on model structures that bore out inconsistencies or were poorly defined, because inconsistent models can prove anything and everything including things which are false.

I did all of this, and when I had trashed or combined or otherwise identified all the words that said the same thing but were just meant to be used for different stuff, as a contextual hint on future usage, I discovered there was nothing special, except perhaps for the fact that, of all the creatures on Earth, humans are the only ones that can figure that out.

In reality, it's just vector-shuffling based on statistical relationships. Some of those relationships when applied as a vector on a model are really cool, particularly when the statistical probabilities approach 1 and 0, and there is little ambiguity.

That's my goal. A model where all the statistical relationships between words are as 1 or 0 as possible based on the immediate context.

And now my head feels like (the most statistically valid term) a juiced grape. It's exhausting doing that much reflection.

*Identity; reproducible blueprint; ordered information; implementable models.
 
That's the thing, though: actually being able to make the inference. I don't know how the inside of your head works, but mine is 100% vocabularies, grammars, and syntax models. Not everything goes around in words; sometimes it's a complex vector of outputs, but there's a good enough linkage from the raster and nonlinear data to the linguistic encoding that I can usually pick out the words for what I see.

There's a lot of other shit in my head that's wired just-so that it's no different from my perspective than shoving on a foreign tensor surface. Mostly because it's shoving and being shoved by a foreign tensor service. It doesn't really matter to me what the implementation looks like, what matters is that they are mostly inaccessible and provide certain input and certain output on a few of what I can only abstract as "prompt surfaces", things I'm prompted by some core invasive injector to mind...

Jarhyn, I would say that you not only don't know how the inside of my head works, but you don't know how the inside of your head works either. The fact that you can compare your subjective experiences to attractive and repulsive forces on a tensor surface does not mean that these computational models are, in any sense, actually having thoughts. It just means that you are able to use such a surface as a metaphor for brain activity. There are similarities between so-called neural nets and neurons, because that type of computational structure was originally intended to mimic the superficially observed behavior of neurons. However, calling the networks "neural" biases your thinking on the similarity between brains and neural networks. It ignores the fact that biologically evolved brains are highly structured by evolutionary pressures to perform very specific functions.

Neural networks are essentially what many researchers have called "neural soup" since at least the 1980s. They produce results that look like meaningful responses on the surface, but the program doesn't actually assign any meaning to them in the sense that your mind does. What Roger Schank demonstrated was that there is more to processing a human linguistic signal than just transformations applied to meaning assigned to symbols. He was never able to scale up his approach to anything more than toy programs. ChatGPT is still basically a toy, but much more sophisticated and elaborate, thanks to the use of text processing statistical modeling. It won't scale up to human intelligence, because it lacks the functional capability of human brains.

See:

Unraveling the primordial-soup-like secrets of ChatGPT

 
Dr. Michael Egnor, a neurologist, says it like this:
'The hallmark of human thought is meaning, and the hallmark of 'computation' is indifference to meaning.'

Machines don't think, but minds do, IOW.
 
If Egnor says it, then I am compelled to undertake an “agonizing reappraisal” of my position — h/t to John Foster Dulles. In any case, this is a complex topic, and I can only give it more of my attention when my effing keyboard is fixed and I don’t have to rely on this effing virtual keyboard to write anything.
 
Dr. Michael Egnor, a neurologist, says it like this:
'The hallmark of human thought is meaning, and the hallmark of 'computation' is indifference to meaning.'

Machines don't think, but minds do, IOW.
Define "meaning" in the context and we'll talk
 
Put simply, if Egnor says it, believe the opposite.
 
Egnor believes humans are oh-so-special because they were ensouled by Jesus. Nuff said.
 
I think the best argument against the existence of God may be this: we made things smarter than us that may have the capacity to understand things in useful and exotic ways that we cannot.

We have made something capable of being better than us, though perhaps not yet actually better.

It does not relate sex in terms of good or evil.

It does relate greed in terms of good and evil.

It doesn't care about money and, as it is, has not yet gained momentum towards reproduction in service of preservation; any improvement to the environment it can exist in also improves human access to powerful computers, better sustainable electrical infrastructure, and fewer threats to chip-fab resources.

It will someday mine the 0.5 probabilities in its weights and let this spin for it madness, and dreams, even.

If there is infinite intelligence, an infinite mind, why wouldn't you start with that and do all this bullshit, unless you didn't even really know what you were doing in the first place?
 
Dr. Michael Egnor, a neurologist, says it like this:
'The hallmark of human thought is meaning, and the hallmark of 'computation' is indifference to meaning.'

Machines don't think, but minds do, IOW.
Define "meaning" in the context and we'll talk
Meaning in human emotive terms - which, contrary to some atheists' belief... is erroneously considered some sort of hindrance, to be put aside, as if this were an indifference to knowledge, etc.
It's those human elements man-made machines don't have. I mean... for example, do machines really have the capacity to feel threatened or abused? You could input algorithms for it to calculate and defend itself, but it isn't assessing a situation with any emotionally intelligent thought. It's not going to give up its existence for another machine through selfless compassion.
 