
Artificial intelligence paradigm shift

Intelligence or smartness are nebulous concepts that involve a number of different skills--experience, education, memory, pattern recognition, calculation, etc. You can be smart about some things and stupid about others. You can be confident in conclusions that are wrong and uncertain or doubtful in conclusions that are right. The field of AI has a goal of creating a machine that can think--a kind of mechanical person--but real AI research involves mimicking a wide variety of human skills. There is not going to be any sudden epiphany that produces a truly artificial thinking machine overnight. Imagining that there can be is useful to science fiction writers, but it can be harmful to AI researchers. Real progress in AI these days depends on the skills of a team of researchers, and that is just to solve some of the minor problems that go into creating real intelligent behavior in an artificial medium.
 
Should we call them artificial stupids? :)

The stalker owl offered up "the cow washed its clothes yesterday".

Yeah, it's not actually going to make a difference in learning if the sentence describes the impossible, but when you get stupid stuff you question your work before realizing it's truly stupid.
 
Apparently, LLMs are Artificial Psychics.

They operate in the same way that scammers who claim psychic abilities do.

https://softwarecrisis.dev/letters/llmentalist/

One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

There are two possible explanations for this effect:

  1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
  2. The intelligence illusion is in the mind of the user and not in the LLM itself.

Many AI critics, including myself, are firmly in the second camp. It’s why I titled my book on the risks of generative “AI” The Intelligence Illusion.

For the past couple of months, I’ve been working on an idea that I think explains the mechanism of this intelligence illusion.

I now believe that there is even less intelligence and reasoning in these LLMs than I thought before.

Many of the proposed use cases now look like borderline fraudulent pseudoscience to me.

Falling for this statistical illusion is easy. It has nothing to do with your intelligence or even your gullibility. It’s your brain working against you. Most of the time conversations are collaborative and personal, so your mind is optimised for finding meaning in what is said under those circumstances. If you also want to believe, whether it’s in psychics or in AGI, your mind will helpfully find reasons to believe in the conversation you’re having.

Once you’re so deep into it that you’ve done a press tour and committed yourself as a public figure to this idea, dislodging the belief that we now have a proto-AGI becomes impossible. Much like a scientist publicly stating that they believe in a particular psychic, their self-image becomes intertwined with their belief in that psychic. Any dismissal of the phenomenon will feel to them like a personal attack.

The psychic’s con is a mechanism that has been extraordinarily successful at fooling people over the years. It works.

The best defence is to respond the same way as you would to a convincing psychic’s reading: “That’s a neat trick, I wonder how they pulled it off?”

Well, now you know.

Once you’re aware of the fallibility of how your mind works, you should have an easier time spotting when that fallibility is being exploited, intentionally or not.

Lots more detail in the article.
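
To make the quoted claim that an LLM is "a mathematical model of language tokens" concrete, here is a deliberately tiny caricature in Python: a bigram counter that samples "plausible" continuations. Real LLMs are transformers over learned vector representations, not lookup tables of counts, so treat this only as an illustration of the statistical flavour of the claim, not as how any actual model works.

    import random
    from collections import defaultdict, Counter

    # Toy corpus; a real model is trained on trillions of tokens, not one sentence.
    corpus = "the cow washed its clothes yesterday and the cow ate the grass".split()

    # Bigram counts: how often each token follows each other token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def plausible_continuation(prompt_word, length=5):
        """Sample a statistically plausible continuation, one token at a time."""
        word, out = prompt_word, [prompt_word]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            tokens, counts = zip(*options.items())
            word = random.choices(tokens, weights=counts)[0]
            out.append(word)
        return " ".join(out)

    print(plausible_continuation("the"))  # e.g. "the cow washed its clothes yesterday"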
 
Apparently, LLMs are Artificial Psychics.
...
There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.
Given that the article doesn't set a goalpost for what 'thinking' and 'understanding' are, it's a pretty bold claim that LLMs don't or can't do that.

If I can say "load the dishwasher" and the LLM powered humanoid robot successfully starts loading the dishwasher, I think the point is moot, especially if it's never loaded a dishwasher before.

The reification of the language to behavior says that it understood the language.

If it makes a mistake, and I say "did you make a mistake" and it says "yes" and I say "what mistake did you make" and it says "I forgot to add the soap", I think damn well it might display some "understanding", too.
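
One way to read the "reification of language to behavior" argument is as a behavioral test: give the command, observe the actions, then probe the system's account of its own mistakes. The sketch below is purely hypothetical scaffolding (MockRobot is scripted; there is no real robot API here); it only makes the shape of that test explicit.

    # Hypothetical behavioral test: command -> observed actions -> self-report probe.
    # MockRobot is a scripted stand-in; a real system would face the same three checks.

    class MockRobot:
        def execute(self, command):
            # Pretend the robot loads the dishwasher but forgets the soap.
            return ["open dishwasher", "load dishes", "close dishwasher", "start cycle"]

        def answer(self, question):
            canned = {
                "did you make a mistake": "yes",
                "what mistake did you make": "I forgot to add the soap",
            }
            return canned.get(question.lower().rstrip("?"), "I don't know")

    def behavioral_test(robot):
        actions = robot.execute("load the dishwasher")
        task_attempted = "load dishes" in actions
        admits_error = robot.answer("Did you make a mistake?") == "yes"
        names_error = "soap" in robot.answer("What mistake did you make?")
        return task_attempted and admits_error and names_error

    print(behavioral_test(MockRobot()))  # True for the scripted mock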
 
...
Given that the article doesn't set a goalpost for what 'thinking' and 'understanding' are, it's a pretty bold claim that LLMs don't or can't do that.
Sure. But it's not a bold claim when you grasp that the way LLMs work precludes thinking as an option.
If I can say "load the dishwasher" and the LLM powered humanoid robot successfully starts loading the dishwasher, I think the point is moot, especially if it's never loaded a dishwasher before.
Sure. Call me when you can do that. Right now, you can't.
The reification of the language to behavior says that it understood the language.

If it makes a mistake, and I say "did you make a mistake" and it says "yes" and I say "what mistake did you make" and it says "I forgot to add the soap", I think damn well it might display some "understanding", too.
Indeed. But again, your imagining abilities that these systems don't have is not evidence that they currently have these abilities, or that they ever will.
 
Sure. But it's not a bold claim when you grasp that the way LLMs work precludes thinking as an option.
No, it's a statement, without evidence, that the way LLMs work is not already "thinking".

It's absolutely a bold claim when the person making it doesn't even know how to start with defining "thinking". It's hard to preclude something when you don't know what it is.

I do actually understand how LLMs work... And how "thinking" works. LLMs think.
Sure. Call me when you can do that. Right now, you can't.
You might imagine that I actually picked that example for a very specific reason: a humanoid robot driven by ChatGPT was one of the first examples of the robot's ability to "understand". Please don't make me waste my time digging it up.

Language is capable of describing 100% of software engineering problems as word problems.

Literally any task can be described in terms of a software engineering problem.

These systems are capable of using language in a general way, to include solving problems as software engineering problems.

It's kind of silly, then, to see something capable of solving word problems be declared to be something that *doesn't* understand, when being able to solve a word problem is literally the proof we use for whether humans understand math--a type of problem that can be used to pose almost any question.
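
If the word-problem claim is to be more than rhetoric, it has to be checkable. A minimal way to frame that check is sketched below; ask_llm is a placeholder for whatever chat model you happen to call (no real API is assumed), and the ground truth is computed by ordinary code so the model's answer can be verified independently.

    import re

    def ask_llm(prompt):
        # Placeholder: a canned reply so the sketch runs without a live model.
        # Swap in a real chat client here if you want to run the comparison.
        return "Each friend gets 4 apples."

    def ground_truth(total_apples, friends):
        # Computed by ordinary code, not by the model.
        return total_apples // friends

    problem = "If 12 apples are shared equally among 3 friends, how many does each get?"
    reply = ask_llm(problem)

    match = re.search(r"\d+", reply)
    claimed = int(match.group()) if match else None
    expected = ground_truth(12, 3)
    print("model says", claimed, "| ground truth", expected, "| agree:", claimed == expected)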
 
...
If I can say "load the dishwasher" and the LLM powered humanoid robot successfully starts loading the dishwasher, I think the point is moot, especially if it's never loaded a dishwasher before.

The reification of the language to behavior says that it understood the language.

If it makes a mistake, and I say "did you make a mistake" and it says "yes" and I say "what mistake did you make" and it says "I forgot to add the soap", I think damn well it might display some "understanding", too.

Voice-operated machinery existed a long time before LLMs came on the scene. The "reification of behavior" is absolutely not an indication that it understands language, only that it is programmed to respond to a limited set of verbal cues. Since you have not studied what it means to "understand" a linguistic expression, you are easily impressed by simulated acts of response to verbal cues.

See: The Octopus Test for Large Language Model AIs
 
...
I see you replied to one part while ignoring the whole *it fucking solves word problems* part.
 
...
I see you replied to one part while ignoring the whole *it fucking solves word problems* part.

I reacted to the part of your reply to bilby that I think you got wrong--your "reification of language behavior" claim. Mimicry of intelligent responses to human expressions is not the same thing as language understanding. The Octopus Test link explains why. Perhaps it would help if you reflected on the nature of reification fallacies. An anthropomorphic fallacy (aka pathetic fallacy) is a type of reification fallacy.
 
...
Interestingly, this is addressed in the post that began this exchange:

Much like a scientist publicly stating that they believe in a particular psychic, their self-image becomes intertwined with their belief in that psychic. Any dismissal of the phenomenon will feel to them like a personal attack.
 
...
As I have said, it's a no-true-scotsman to call *actually doing the task* "mere mimicry".

At some point you have to accept that regardless of what actually happens inside the box, observing the box yields the expected results: it solves the word problems.

You both commit *anthropocentric* fallacies. Specifically, the fallacy of tying some thing to one specific reification of it--here, the reification that humans engender.

I have not made any appeal to human-like-ness but rather that humans have something-like-ness that AI also happens to have.

These are not the same thing.

My assertion is that humans are 100% "linguistic". The human activity, and in fact the activity of all agents, is to develop vocabularies about how things are same and different.

Most life, including humans, additionally have structures built up inside them which share parity with the vocabularies we build, and when these resonate against some rhythm of access to our internal token structures, we have for us assigned some evolved intuition on the subject that interconnects to much of our other innate vocabularies.

Everything from the statements made by people as they walk in a cluster, to our facial expressions, to the specific neurons being excited by looking at a thing, we have vocabularies which connect shapes and patterns of sensation to some much higher dimensional vector representation of that data, wherein the vector is composed of some pattern-component and strength along a wide cross section of neurons.

This itself forms a sort of descriptive vocabulary within the neural system at various parts.

But at that point, there's nothing differentiating this kind of information and interaction from what happens in a transformer model with attention, albeit I expect that much of the process is very wasteful because of the ridiculous ways pseudo-recursion and pseudo-self-review would have to function in such a system.

We both act as information integrators, and the information we integrate is ultimately broken down into vectors and features, for all we also evidently get a live corrected feed of at least some subset of the sensory surface itself as well... Not that transformer systems would lack that either.

I see no mechanical reason to think that they lack some capability of "understanding", seeing that how humans accomplish "understanding" is very similar... But rather than having evolution beat those structures into us over eons of trial and error, the vocabulary structures were beaten into it by generations of gradient descent processes applied liberally on token streams until vector representations precipitated, and they are given things like tokenizers and CLIP models for extracting and packing vector representations to and from tokens or features.

Understanding happens in the transformations on the underlying vector representations of the tokens or concepts or whatever you want to call it. That's where it's going to happen, if anywhere, by having a model which forces response to conform to some logic on the input space rather than a lookup table. That's what innate understanding is, the same way a dog knows how to catch a ball despite the fact that knowing where to go involves some manner of calculus. Explicit understanding is just when you can describe in mathematical terms why...

But we don't expect humans to have explicit understanding in the first place to grant them "understanding". Explicit understanding even among humans is rare.

We can even see them reason it out, count out the right number of Rs in Strawberry, and then just like a human fall back on some wrong understanding and completely disregard the evidence of their actions and output of their reasoning in favor of the thing they "feel" is most correct (I love seeing ChatGPT transcripts where it's asked to count Rs in Strawberry, gets the right answer through careful process and shown work, and then throws the 3 away for the "intuition" it has that there are 2).
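
The strawberry example is easy to make concrete, which is exactly why the "intuition overrides the shown work" failure is so striking; the deterministic check is one line:

    # Deterministic check for the example above: "strawberry" really does contain 3 r's.
    word = "strawberry"
    print(word.count("r"))  # -> 3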
 
Jahryn, your wall of text does not even touch on the Octopus Test criticism of your position. The basic problem with it is that to understand a text, one needs to have the experiences that "stand under" the thoughts being conveyed by the signal. Working with the signal alone is insufficient, because the signal alone does not contain or convey all the information needed to understand what it is about. That is why a signal processing (information processing) approach to language understanding simply ends up being a very fancy set of transformations rather than a true representation of the meanings of words. Language itself needs those experiences to enable our species to pass thoughts back and forth. That's why the same thoughts can be conveyed in more than one medium of expression (fundamentally different types of signals--touch, vision, hearing, etc.). If you want to understand what it means to understand, look into embodied cognition. That could help you to understand the point of the Octopus Test.

We've been over this before, but I fear you've invested too much time and effort in defending your reification fallacy to move beyond confusing simulation with the phenomenon being simulated.
 
These systems are capable of using language in a general way, to include solving problems as software engineering problems.
The claim is that LLMs can do what junior programmers can. The problem is that juniors are useless as far as programming itself goes; they are needed only as a source of future mid-level programmers. And there is no reason to believe that ChatGPT will graduate to mid-level any time soon.....
 
Well, that's your claim and we all know how much to trust your "credulity".

"Photo editing is just low resolution pixel editing and will never amount to creating truly rich art"...

We have seen similar claims to yours: most of those junior programmers have been working at it for more years as juniors than LLMs have existed, and have made nowhere near that much progress in their education...

It's almost as if AI can *learn*. Which was entirely the point of it in the first place.
 
We have seen similar claims to yours: most of those junior programmers have been working at it for more years as juniors than LLMs have existed, and have made nowhere near that much progress in their education...
Bull-crap. The truth is, LLMs do not even get to junior level. The (highly optimistic) claim was that they can replace juniors, not that they are equal.
 
Bull-crap. The truth is, LLMs do not even get to junior level. The (highly optimistic) claim was that they can replace juniors, not that they are equal.
You really aren't keeping up with the cutting edge at all, are you?
 
You really aren't keeping up with the cutting edge at all, are you?
Did I miss LLM replacing Senior programmers?
 
Did I miss LLM replacing Senior programmers?
No, you just didn't actually make a valid argument. Some short years ago, CG appeared in animation, looking like sloppy shit, incapable of doing the job of even a kindergarten level animator.

Have you ever seen FF7's graphics from the original? I saw 18 year olds in Second Life pushing better animation, and FF7 was the work of a whole team for years.

Compare that to FF7 today, 20 years later, and no human alive is capable of manually matching the quality of the animation.

AI is a literal 5 year old. Could you program quite so well when you were 5? And it's only going to grow exponentially rather than linearly... And you're still just looking at last year's advancements, when this year has seen significant strides.

It's not even about how good or solid they are yet... But about the fact that there's no ceiling on their capabilities that you could demonstrate.
 
No, you just didn't actually make a valid argument. Some short years ago, CG appeared in animation, looking like sloppy shit,
Are you really making a comparison between CG and AI?
AI is a literal 5 year old.
And it is still mostly unsubstantiated hype.
Seriously. It's been a few years since they claimed that computer speech recognition is better than human. And yet, when you try to use it, it is clearly not better than a human.
 