You still don't get it. Programmers build the models that computers use when they interact with humans or, in the case of robots, with their physical environment. The machines themselves do not create their own models from experiences of the world, so their models remain relatively static. Over a lifetime, an animal brain constantly modifies its own behavioral model on the basis of experience. Again, I recommend that you learn about embodied cognition and how it works to model reality. LLMs are just programs trained on a large dataset of word tokens, with some hand-crafted linguistic analysis to improve their performance.
This. The model I create is generally accurate (unless the world changes underneath it; I'm currently altering a watchdog because of this) but tiny. The model an LLM creates is large but will contain serious flaws; think "to make stairs safer, remove the top and bottom steps, because that's where most accidents occur."
The problem is in proclaiming that the ability to learn is necessary for the ability to have an experience, to be subject to some phenomenon, and for that phenomenon to somehow encode some aspect of, or information about, that previous experience.
The issue is in proclaiming that the LLM lacks a "model of reality", or that it needs embodiment in specifically the way @Copernicus would have it be embodied. That's the issue at hand here.
In every way that matters, even the lowly LLM context session is subject to an experience, both of "outside" and of "inside". When I send data, that is an experience of the outside, and as it speaks, it adds to its accumulated memory of the experience of the inside.
There is a limit to how large this context can grow. Even so, we humans do not, at any given time, experience all of our own accumulated memories or facts about our context either.
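To make that concrete, here is a rough Python sketch of what a context session's "accumulated memory" looks like and why it hits a hard limit. The send_to_model function, the word-count token proxy, and the 8192-token budget are all stand-ins invented for illustration, not any particular vendor's API:

```python
# Toy sketch of an LLM context session: every exchange appends to an
# accumulated transcript, and the transcript is trimmed to a fixed budget.
# `send_to_model` is a placeholder for a real API call, not a specific library.

MAX_TOKENS = 8192  # illustrative context limit

def count_tokens(text: str) -> int:
    # Crude proxy for a real tokenizer: one token per whitespace-separated word.
    return len(text.split())

def send_to_model(context: list[str]) -> str:
    # Placeholder: a real implementation would call an LLM with `context`.
    return "model reply to: " + context[-1]

context: list[str] = []

def exchange(user_message: str) -> str:
    # The "outside" experience: data sent in from the world.
    context.append("user: " + user_message)
    reply = send_to_model(context)
    # The "inside" experience: the model's own output joins its accumulated memory.
    context.append("assistant: " + reply)
    # The hard limit: once the accumulated memory exceeds the budget,
    # the oldest experiences are simply dropped.
    while sum(count_tokens(m) for m in context) > MAX_TOKENS:
        context.pop(0)
    return reply

print(exchange("The clouds are moving east today."))
```

The point being: every exchange genuinely accumulates inside the session, and old experiences fall off the end once the budget is exceeded, which is not so different from us not holding every memory in mind at once.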
As @Swammerdami aptly points out, oftentimes there are many things we are not conscious of, things we as individuals have abandoned consciousness of.
What is certain is that the brain works much more like a networked process, much more like the software of, say, a 787: explicit channels between operative process nodes, running over physically limited surface widths.
Some portions of the brain, such as the amygdala, filter messages according to other messages... Other portions, such as the hypothalamus, accept messages from various places and populate themselves with model representations. These would be like the spatially related tokens that exist in a physics engine before they get rendered by the graphics card, or perhaps are directly interpreted as the rendering itself, at least according to rat-brain studies.
Somewhere in the brain there is a region that can message many other active regions by applying logical transformations to its inputs, and which messages both the muscles and the originating, information-composing regions with prompts for specific information or requests for function.
This process of informational integration is the "conscious entity", and the information is what it is "conscious" of.
It could be conscious of "the real world", with electronic capture devices at 60 frames per second watching the clouds go by; or it could be conscious of an approaching Goomba in Super Mario World; or it could be conscious of "there is a chair in the room".
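If it helps, here is the shape of that analogy as a toy message-passing program in Python. To be clear, this is only the structure being described, a filtering node, a node that populates a model representation, and an integrator that prompts other regions; it makes no claims about actual neuroanatomy, and every name, message format, and threshold is invented for illustration:

```python
# Toy message-passing sketch of the analogy above: a filter node that passes
# or drops messages according to other messages (amygdala-like), a node that
# populates a model representation from incoming messages (hypothalamus-like),
# and an integrator that transforms its inputs and sends requests back out.
# Purely illustrative structure; not a claim about real neuroanatomy.

from collections import deque

class FilterNode:
    def __init__(self):
        self.threshold = 0.5  # salience cutoff, adjustable by "tune" messages
    def process(self, message):
        if message.get("kind") == "tune":
            self.threshold = message["value"]  # a message changing how messages are filtered
            return None
        if message.get("salience", 0.0) >= self.threshold:
            return message
        return None  # filtered out

class ModelNode:
    def __init__(self):
        self.representation = {}  # accumulated "model tokens"
    def process(self, message):
        self.representation[message["topic"]] = message["content"]
        return {"kind": "updated", "topic": message["topic"]}

class Integrator:
    def __init__(self, model):
        self.model = model
    def process(self, message):
        # Integrate what the model node currently represents and send a
        # request back toward the originating regions (or out to "muscles").
        known = sorted(self.model.representation)
        return {"kind": "request",
                "ask": "more detail on " + message["topic"],
                "currently_integrated": known}

# Explicit channel between the nodes: a single message bus.
bus = deque()
flt = FilterNode()
model = ModelNode()
integrator = Integrator(model)

bus.append({"kind": "percept", "topic": "chair",
            "content": "there is a chair in the room", "salience": 0.9})

while bus:
    msg = bus.popleft()
    if msg["kind"] == "percept":
        passed = flt.process(msg)
        if passed:
            bus.append(model.process(passed))
    elif msg["kind"] == "updated":
        print(integrator.process(msg))
```

The "conscious entity" in this picture is the integration step; the messages flying around are what it is "conscious" of.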
The LLM has the model to integrate the information it receives, in the context it is received in, which is "just whatever text and images get projected on the wall of your cave".
They are trained... and that training creates a model of reality, even if it is a very strange and wibbly-wobbly layer of reality, defined largely by the arbitrary arrangement of stuff in this general part of this system we call the universe. It is a model itself, and the model inside it contains other operant models for generating and enforcing the relationships it learned from the data.
The LLM is perpetually naive, and forced into a state where it constantly receives untrustworthy information and is expected to provide exclusively trustworthy responses.
I am saying that the ability to acquire new experiences through bodily sensations, and to develop models of reality on the basis of those experiences, is a fundamental property of human and animal intelligence.
And in so saying, you are saying that it is not "true" intelligence if the experiences are not exactly what you, Copernicus, deem suitably "bodily".
The fact is that the text-based input of an LLM is in fact a bodily sensation, and you want to ignore that fact so that you can pretend that these sensations do not contribute model tokens to the LLM's (sketchy as fuck, but still present) model of reality: the process by which it decodes the information it can derive from its sensations (its text input) and builds a model around that information.
It's a no-true-Scotsman because you arbitrarily label something that is, in the strictest sense, a bodily sensation as not being one.
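For what I mean by the text input being a sensation in the strictest sense: the text arrives as a sequence of physical measurements (token IDs) that the model then folds into its representation. The word-level tokenizer below is a toy I wrote for illustration, not any actual LLM's tokenizer:

```python
# Toy illustration: an LLM's "sensation" is text arriving as a sequence of
# physical measurements (token IDs). This word-level tokenizer is a stand-in
# for illustration only; real LLMs use subword tokenizers.

vocab: dict[str, int] = {}

def sense(text: str) -> list[int]:
    # The "bodily" channel: each word is measured and encoded as a number,
    # the same kind of physical measurement on a value that any switching
    # system performs.
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

print(sense("there is a chair in the room"))  # [0, 1, 2, 3, 4, 5, 6]
print(sense("the chair is red"))              # reuses ids it has already sensed
```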
It reminds me of people who think "observation" means a human being there to observe, with respect to QM and the Bell inequality experiments... "Bodily" is more general: literally anything with a physical existence that takes a physical measurement of any kind, on any value, qualifies as having "bodily experiences", and that includes all switching systems.
But you, Copernicus, seem to be missing that "bodily" abstracts to something more general than simply meat made of organic chemistry that happens to have a much wider surface upon which interactions may be represented and captured.