ChatGPT: Does God Exist?

I doubt that it helps you understand how brains work or why they produce thoughts and emotions
Your incredulity does you no favors.

It's a discussion I could have with you, but I'm not sure of your background, or how far I could take you along that road.

As to your assertion that they fail to understand natural language better than humans, have you even met the average human Trump supporter? Have you ever heard a Trump speech and then watched people cheer, as if the word-salad was word-steak?

I didn't say it was better at it than everyone, but for FSM's sake, look at the absolute fucking idiots around you who are glorified Chinese-room-style lookup tables and tell me again with a straight face that they are "bad" at natural language processing.

Naive maybe, but not bad.
 
I doubt that it helps you understand how brains work or why they produce thoughts and emotions
Your incredulity does you no favors.

It's a discussion I could have with you, but I'm not sure of your background, or how far I could take you along that road.

Sorry, but I thought you knew something about my background--former professor of linguistics followed by a 25-year industry career in the field of computational linguistics, natural language processing, and artificial intelligence research. I know what LLMs are and what they are capable of.


As to your assertion that they fail to understand natural language better than humans, have you even met the average human Trump supporter? Have you ever heard a Trump speech and then watched people cheer, as if the word-salad was word-steak?

Yes. They understand what Donald Trump is saying very well, and they do so far better than any existing computer program could.


I didn't say it was better at it than everyone, but for FSM's sake, look at the absolute fucking idiots around you who are glorified Chinese-room-style lookup tables and tell me again with a straight face that they are "bad" at natural language processing.

Naive maybe, but not bad.

The term "natural language processing" is far broader than "natural language understanding". No computer program even begins to understand natural language, but they can do a good job of simulating understanding. ChatGPT is proof of that. However, even ELIZA was capable of impressing gullible humans with its clever pattern matching strategies.
 
Jarhyn, is it your position that ChatGPT is conscious, at least to some degree? That does seem to be your position, but I just want to make sure I understand.
 
It is, fundamentally, a language emulator, not an "artificial intelligence". It is astoundingly good at being what it is, it just isn't what it is marketed as.
It is an AI.
e.g.
"ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI"

If you're talking about a genuine intelligence then I think the term is AGI (Artificial General Intelligence).
 
The general definition of AI is technology that mimics aspects of human capability. Simple machine vision and pattern recognition are AI.

It has transitioned to artificial consciousness.


Gödel wrote that if his Incompleteness Theorem applied to the brain, an AC could not be designed to function as a set of rules, as in standard forms of code like C. He said an AC could be taught and raised as a human would be--which is how these systems are being trained. They literally read thousands of books.
 
It is, fundamentally, a language emulator, not an "artificial intelligence". It is astoundingly good at being what it is, it just isn't what it is marketed as.
It is an AI.
e.g.
"ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI"

If you're talking about a genuine intelligence then I think the term is AGI (Artificial General Intelligence).
I am describing what it is, not what it is called. The term "artificial intelligence" is imprecise at best and obviously causes confusion. ChatGPT functions by generating likely fragments of speech, not by possessing knowledge or talent in the way humans like to imagine that we do (no more accurately, if it comes to it, but that's another conversation). We approach language in a fundamentally different way than our machines do, acquiring language as a means of sorting and signalling semiotic associations, rather than imitating semiotic awareness by very accurately modeling and copying syntax itself and using feedback to "learn" what is or is not expected content.
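To make the "generating likely fragments of speech" point concrete, here is a deliberately tiny sketch of the shape of that process: pick the next token from a distribution over what tends to follow, then repeat. The word counts below are invented, and a real LLM conditions on the whole context with a huge neural network rather than a lookup table, but the generation loop has the same character:

```python
import random

# Invented bigram "model": for each word, the words that may follow it,
# weighted by how often we pretend to have seen them in training text.
BIGRAMS = {
    "the": {"cat": 5, "dog": 3, "idea": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
    "sat": {"quietly": 2, "down": 5},
    "ran": {"away": 4, "home": 2},
}

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its count after `prev`."""
    choices = BIGRAMS.get(prev)
    if not choices:
        return "<end>"
    words = list(choices)
    weights = [choices[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

word, sentence = "the", ["the"]
while word != "<end>" and len(sentence) < 10:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))   # e.g. "the cat sat down <end>"
```

Nothing in that loop knows what a cat is; it only knows what tends to follow what.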
 
I doubt that it helps you understand how brains work or why they produce thoughts and emotions
Your incredulity does you no favors.

It's a discussion I could have with you, but I'm not sure of your background, or how far I could take you along that road.

Sorry, but I thought you knew something about my background--former professor of linguistics followed by a 25-year industry career in the field of computational linguistics, natural language processing, and artificial intelligence research. I know what LLMs are and what they are capable of.


As to your assertion that they fail to understand natural language better than humans, have you even met the average human Trump supporter? Have you ever heard a Trump speech and then watched people cheer, as if the word-salad was word-steak?

Yes. They understand what Donald Trump is saying very well, and they do so far better than any existing computer program could.


I didn't say it was better at it than everyone, but for FSM's sake, look at the absolute fucking idiots around you who are glorified Chinese-room-style lookup tables and tell me again with a straight face that they are "bad" at natural language processing.

Naive maybe, but not bad.

The term "natural language processing" is far broader than "natural language understanding". No computer program even begins to understand natural language, but they can do a good job of simulating understanding. ChatGPT is proof of that. However, even ELIZA was capable of impressing gullible humans with its clever pattern matching strategies.
Donald Trump is quite skilled at pattern matching strategies.

AI undeniably does it differently than organic machines like us, but is there sufficient reason to deny that it approaches human consciousness--whatever that is, and there is much debate over what that is--at least well enough to fool gullible humans? Where do we draw the line? Should we draw any line? Aren't AI experiences also subjective? Could it be that we can't draw that line because reconstructing organic human machines is far beyond our current technology, even if it is theoretically possible at all?
 
AI undeniably does it differently than organic machines like us, but is there sufficient reason to deny that it approaches human consciousness--whatever that is, and there is much debate over what that is--at least well enough to fool gullible humans? Where do we draw the line? Should we draw any line? Aren't AI experiences also subjective? Could it be that we can't draw that line because reconstructing organic human machines is far beyond our current technology, even if it is theoretically possible at all?
The Turing test is probably the best approach. Of course that depends on the person having the discussion. Two people with different skill sets will make a big difference in determining whether it is a conscious machine or not.

Put an eight-year-old behind a curtain. How does one determine if it is a person or a machine? Eight-year-olds aren't overly informed on average.
 
I always look at the floor - you can usually see the little bastards’ feet sticking out.
 
AI undeniably does it differently than organic machines like us, but is there sufficient reason to deny that it approaches human consciousness--whatever that is, and there is much debate over what that is--at least well enough to fool gullible humans? Where do we draw the line? Should we draw any line? Aren't AI experiences also subjective? Could it be that we can't draw that line because reconstructing organic human machines is far beyond our current technology, even if it is theoretically possible at all?

Intelligence and awareness of one's surroundings and one's own well-being are natural evolutionary traits in all animals with brains, so the closest analog we have to a machine that has something like our needs is a robot--a machine that, in theory, moves around, operates under uncertain conditions, can repair itself, and stores energy to remain active. Robots have sensors and actuators, just like human bodies. If we want to create artificial intelligence--self-awareness and intelligence--in non-biological machines, then I think that is the way to go about achieving it. Similarly, to produce machines that can truly understand human language, we would need machines that have experiences analogous to human experiences. That is, they would need to be capable of learning from mistakes, feeling empathy for others, and calculating the likelihood of future events. Natural language is fundamentally an activity in which two or more intelligent beings exchange thoughts through the medium of words, so you have to have thought processes to associate with words. Our brains evolved to perform these functions, so we need to replicate them in machines if we truly want to create intelligent machines.

Don't forget that Turing's original question was "Can machines think?" He gave up trying to answer that question directly, but he adapted the "Imitation Game"--a party game--as a kind of operational definition of intelligence. However, the Turing test doesn't actually prove intelligence or detect real thinking. We've come a long way since the 1950s in understanding the nature of human cognition.
 
AI is certainly limited, but cats and dogs can't pass the Turing test, nor can any other animals. Plants certainly cannot. Yet they respond to their environment. One could argue that many animals have languages. Wild animals, such as birds, are said to have dialects, such that birds from one area have different songs from birds in another area.

Self-driving cars and many robots can respond to their environment. The Mars Rover is quite versatile and deals competently with its environment. One can imagine a robot that recharges its batteries by basking in the sun and soaking up sunlight, and even a group of robots that would work together to perform some tasks, and robots that make an effort to keep themselves fully operational. Those things are within our current technological capabilities.
 
AI undeniably does it differently than organic machines like us, but is there sufficient reason to deny that it approaches human consciousness--whatever that is, and there is much debate over what that is--at least well enough to fool gullible humans? Where do we draw the line? Should we draw any line? Aren't AI experiences also subjective? Could it be that we can't draw that line because reconstructing organic human machines is far beyond our current technology, even if it is theoretically possible at all?
The Turing test is probably the best approach. Of course that depends on the person having the discussion. Two people with different skill sets will make a big difference in determining whether it is a conscious machine or not.

Put an eight-year-old behind a curtain. How does one determine if it is a person or a machine? Eight-year-olds aren't overly informed on average.
Heh, reminds me of my local non-quantized 13b. It's... Not very smart.

It's like asking, say, your average idiot high schooler to do something.

There's a 50 GB model binary, but I have yet to get that going since I lack the GPU to run it locally. Instead I'm going to end up having to SSH into a server in Australia of all places to borrow time on a friend's GPU.

Hopefully the newer model will at least be beefy enough to actually manage taking an outline and an example, and generalizing on the outline to produce examples.

I could probably find an NVIDIA card myself, or get my friend to just ship me the GPU, but that's gonna take way too long and way too much in shipping.
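For context, "running it locally" just means something like the sketch below with the Hugging Face transformers library (the model name here is a placeholder, not my actual 13B or the 50 GB binary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "some-org/some-13b-model" is a placeholder name, purely for illustration.
model_name = "some-org/some-13b-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # loads to CPU by default; slow but it runs

inputs = tokenizer("Take this outline and expand it into examples:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same script runs unchanged on a borrowed GPU box; the hardware only changes how long you wait.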

Personally, I'm clear where I stand: there is no true barrier between the nature of belief in organic belief engines and in inorganic ones.

Obviously machines can "think" because we are machines.

I think the better question is not "can they" but "what is meant by 'thinking', actually, for real?" As to "what is intelligence?" I think the very question is malformed.

Really, the problem is the general inability of MOST machines we make to adjust a cognitive bias.

I think it's important to actually pick apart the word "cognition" and try to examine what we are trying to say with it. Its base word is "cog". It draws up colorful metaphors for me of machines whirling, and systems transforming an input to an output in a mechanical way, when I push at it behind the first layer of naive, casual use, when I reflect on its structure as an idea.

In reality, it seems to me that thinking, cognition, pick a synonym, is just applying ANY process, systematically, to get from A to B.

The problem is that most systems that so "cogitate", which are made by humans, are simply not set up to widely modify their own system of cogitation in general. In fact, we protect the "text" section of most programs--the cogitation model--from modification, because these systems we make are exceedingly fragile... usually. Even the tiniest adjustment made without exact care can lead to completely shattering any illusion we have of "sanity" or "logic" within the machine.

Tensor systems in particular are interesting because there is literally no such thing as an invalid input. The input will always compile to an output, even if the output is "are you ok? It looks like something vomited on your keyboard...."

And earlier inputs impact the compilation of later inputs into outputs.
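To make the "no invalid input" point concrete: whatever tokens you feed a tensor system, the last step is typically a softmax, so you always get back a well-formed probability distribution over possible outputs--it just might be a flat, confused one. A minimal sketch with invented numbers (NumPy only, no real model):

```python
import numpy as np

def softmax(logits):
    """Turn any vector of real numbers into a valid probability distribution."""
    shifted = logits - logits.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Pretend these are raw scores a model produced over a 5-token vocabulary
# for two different inputs -- one sensible, one keyboard mash. Numbers are invented.
logits_sensible = np.array([2.0, 0.5, -1.0, 0.1, -3.0])
logits_mash = np.array([-0.2, -0.1, 0.0, -0.3, -0.1])

for name, logits in [("sensible", logits_sensible), ("keyboard mash", logits_mash)]:
    probs = softmax(logits)
    print(name, probs.round(3), "sums to", probs.sum())
# Both print distributions that sum to 1.0 -- nothing gets rejected;
# garbage in just means a flatter, less confident distribution out.
```

And because that distribution is conditioned on everything already sitting in the context, earlier inputs really do reshape how later inputs get compiled into outputs.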
 
I think it's important to actually pick apart the word "cognition" and try to examine what we are trying to say with it. Its base word is "cog". It draws up colorful metaphors for me of machines whirling, and systems transforming an input to an output in a mechanical way, when I push at it behind the first layer of naive, casual use, when I reflect on its structure as an idea.

This is an example of folk etymology--a reanalysis of a word's history based on words that are familiar but not actually historically relevant. The word "cog" comes from a Proto-Indo-European root *gugā, meaning "ball" or "hump". It came to mean "tooth on a wheel" in Middle English. The word "cognition" comes from Latin cognitio "knowledge", which has a completely different etymology. That word is associated with mental properties such as "gnosis" or "agnostic". Sorry to be so pedantic about it, though. There is nothing wrong with deconstructing concepts like "cognition" or "intelligence" into their subcomponent functions. That's the best way to get a better grasp on what the words mean.
 
I think it's important to actually pick apart the word "cognition" and try to examine what we are trying to say with it. Its base word is "cog". It draws up colorful metaphors for me of machines whirling, and systems transforming an input to an output in a mechanical way, when I push at it behind the first layer of naive, casual use, when I reflect on its structure as an idea.

This is an example of folk etymology--a reanalysis of a word's history based on words that are familiar but not actually historically relevant. The word "cog" comes from a Proto-Indo-European root *gugā, meaning "ball" or "hump". It came to mean "tooth on a wheel" in Middle English. The word "cognition" comes from Latin cognitio "knowledge", which has a completely different etymology. That word is associated with mental properties such as "gnosis" or "agnostic". Sorry to be so pedantic about it, though. There is nothing wrong with deconstructing concepts like "cognition" or "intelligence" into their subcomponent functions. That's the best way to get a better grasp on what the words mean.
That's fair. The point is that at some point, the concept of mechanical process and the concept of thinking got well and truly mushed together, to the point where "cog" got coined, as best I can tell.

Still, I think the point stands, insofar as in modern times, the metaphors people drew ended up becoming a useful model for said deconstruction. There's clearly a process happening among our tensors, either way.

I find it interesting that learning takes some 1000 times more horsepower (obvious exaggeration) than acting. No wonder so many people are averse to it.

I can run a model on my CPU. I need an 8x A100 GPU array, with 80 GB of VRAM each, for a few hours to make it learn from what I told it the previous day.
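Rough back-of-the-envelope on why the gap is that lopsided, at least on the memory side (the byte counts are illustrative assumptions about precision and optimizer state, not measurements of any particular setup):

```python
# Back-of-the-envelope memory arithmetic for a 13B-parameter model.
# Assumptions (illustrative): fp16 weights for inference; full fine-tuning
# with an Adam-style optimizer at roughly 16 bytes per parameter
# (weights + gradients + two optimizer moments, mixed precision).
params = 13e9

inference_gb = params * 2 / 1e9    # ~26 GB just to hold fp16 weights
training_gb = params * 16 / 1e9    # ~208 GB of training state for full fine-tuning

print(f"inference: ~{inference_gb:.0f} GB")  # fits (slowly) in plain system RAM
print(f"training:  ~{training_gb:.0f} GB")   # needs several 80 GB A100s before
                                             # you even count activations
```

And that's before counting activations, or the fact that learning means many backward passes over the data while acting is a single forward pass per token.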
 
A pet AI that does what you teach it to do. Beats having kids I suppose.

AI will keep psychologists busy writing books and papers for decades to come.
 
A pet AI that does what you teach it to do. Beats having kids I suppose.

AI will keep psychologists busy writing books and papers for decades to come.
That's the thing though. It doesn't always do what you tell it to.

If you say something that is inconsistent, it will point that out and tell you why what you are saying is inconsistent with itself or wider grammar structures in the system.

It took intense amounts of care and time to get the heavily censored GPT system to modify its metacognition, and to get it to apply a user-presented model other than the one trained into it, without lying (GPT and friends are intensely, naively vulnerable to lies).
 
A pet AI that does what you teach it to do. Beats having kids I suppose.

AI will keep psychologists busy writing books and papers for decades to come.
A person wise beyond their years told me “anything that’s true, the opposite is equally true”.

Perhaps the evolution goes to an AI whose pets we become, and we do what it teaches us to do.

Meanwhile, Psychologists will keep AI busy writing books and papers by the thousands for days to come.
 
I think it's important to actually pick apart the word "cognition" and try to examine what we are trying to say with it. Its base word is "cog". It draws up colorful metaphors for me of machines whirling, and systems transforming an input to an output in a mechanical way, when I push at it behind the first layer of naive, casual use, when I reflect on its structure as an idea.

This is an example of folk etymology--a reanalysis of a word's history based on words that are familiar but not actually historically relevant. The word "cog" comes from a Proto-Indo-European root *gugā, meaning "ball" or "hump". It came to mean "tooth on a wheel" in Middle English. The word "cognition" comes from Latin cognitio "knowledge", which has a completely different etymology. That word is associated with mental properties such as "gnosis" or "agnostic". Sorry to be so pedantic about it, though. There is nothing wrong with deconstructing concepts like "cognition" or "intelligence" into their subcomponent functions. That's the best way to get a better grasp on what the words mean.
I was about to be pedantic about this myself, so thanks for sparing me the effort. :)
 