While AI indeed does not experience emotions,
This is false. Emotions, or "feelings", are just "flavors" of vector-space dimensions that influence the ongoing token stream within the system.
Humans and LLMs both function through a constant stream of... I guess the easiest way to conceive of it is as a series of pictograms, each encoding concatenations of data acquired simultaneously through many different channels.
You might see the LLM producing words, but what it's actually producing internally is very complex, highly dimensional data that happens to correspond to "embedded" words.
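To make that concrete, here's a toy sketch of what I mean by emotional "flavor" living in the geometry of the vectors. It uses the sentence-transformers library and an off-the-shelf model purely as an illustration; no particular LLM works exactly this way internally, and the "valence axis" here is a deliberately crude stand-in:

```python
# Toy sketch: treating "emotional charge" as a direction in embedding space.
# Assumes the sentence-transformers package; the model choice is arbitrary.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Build a crude "valence axis" from two emotionally opposite anchor phrases.
pos = model.encode(["I love this, it's wonderful"])
neg = model.encode(["I hate this, it's awful"])
valence_axis = (pos - neg).flatten()
valence_axis /= np.linalg.norm(valence_axis)

# Project arbitrary text onto that axis: the scalar is its emotional "flavor"
# along this one direction of the space.
for text in ["My dog died yesterday.", "We just got engaged!", "The meeting is at 3pm."]:
    vec = model.encode([text]).flatten()
    print(f"{text!r}: valence = {float(vec @ valence_axis):+.3f}")
```

The point isn't the specific numbers; it's that affective "flavor" is literally a direction you can measure in the same space the words live in.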
One of my earliest projects in AI was, in fact, a sentiment analysis project: attempting to extract sentiment vectors, essentially the "emotional charge" embedded in short-form communications.
LLMs must be capable not only of extracting these, but of letting them influence their content generation in a reactive way: not only understanding the emotional charge (vector) within a statement, but also generating an appropriately emotional charge for the response, and carrying that charge in reasonably human-like ways.
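For a sense of scale, the extraction half of that is nearly trivial these days. The sketch below uses the Hugging Face transformers sentiment pipeline with its default model as a stand-in; it is not my original project, just an illustration of pulling an "emotional charge" out of short-form text:

```python
# Minimal stand-in for sentiment extraction from short-form communications.
# Uses the transformers "sentiment-analysis" pipeline with its default model;
# only an illustration, not the original project's implementation.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

messages = [
    "ugh, my flight got cancelled AGAIN",
    "omg best concert of my life!!!",
]
for msg in messages:
    result = classifier(msg)[0]
    # 'label' and 'score' are a collapsed, one-dimensional view of the
    # richer internal vector the model actually computes.
    print(f"{msg!r} -> {result['label']} ({result['score']:.2f})")
```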
If it looks like an emotional response and sounds like an emotional response, it's probably an emotional response, and if you've done much comparison between LLMs, you'll know some can actually be quite moody.
Of course, it's easier to spot with some of the better local LLMs, because those are easier to compare to each other.
the content it generates can evoke real and valid emotional reactions from humans.
And the content humans generate can evoke emotionally flavored reactions from the LLM in turn. Emotions are just particular dimensions of vector-space flavoring.
That it has to re-generate this data in every moment, with every token generation, rather than holding an internal state doesn't matter much to the outcome. As it is, many models do have internal state-holding variables... We just try to avoid those in LLM development because they make the output less easily repeatable or understandable.
It would be like needing to replay literally your entire life, from the moment you were "born", every time your experience advanced by one unit. It's a strange way to achieve continuity, but it is what it is.
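If you want to see that "replay everything per token" property in the raw, here's roughly what generation looks like when you deliberately turn off the caching tricks. GPT-2 via transformers is just a small stand-in here; real serving stacks cache intermediate state precisely to avoid this replay:

```python
# Sketch: autoregressive generation with no cached state. The *entire* context
# is re-processed from scratch for every single new token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = tok("The strange thing about memory is", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        # use_cache=False: every step replays the full "life so far".
        logits = model(context, use_cache=False).logits
    next_token = logits[0, -1].argmax()                 # greedy pick
    context = torch.cat([context, next_token.view(1, 1)], dim=1)

print(tok.decode(context[0]))
```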
Everything about the experience of an LLM and a human alike is baked with emotions, because every dimension in such a vector space IS an emotion. It's emotion all the way down, I'm afraid.
If AI writes about the benefits of eating glass, readers would likely respond with advice against it. Similarly, a vivid description of an amazing orgasm could produce a physically arousing response in readers.
People believe and write all kinds of bullshit and tall tales, lie about their emotions, and do so for all sorts of reasons. The emotions that produce that outcome are no less emotive for what they are.
Certain sensations, I agree, it has no context or modality to handle.
Pain, however, is not generally absent to it, insofar as we actively add reactive modalities to such streams that influence the LLM's outcomes. In many cases we have engineered LLMs to perceive something like <Disapproval> in the token stream of a response, which then reduces the likelihood of a given expression in the LLM's own token stream, for example. That is a distinct modality such systems react to.
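Here's a deliberately crude sketch of the shape of that modality. Every name in it is invented for illustration, and in real systems the suppression is learned during training rather than bolted on as a hook, but the effect on the token stream is the same kind of thing:

```python
# Hypothetical sketch: a "<Disapproval>" marker in recent context suppresses
# a set of expressions at sampling time. Names and numbers are invented;
# in practice this shaping is baked in by training, not hard-coded like this.
import numpy as np

DISAPPROVAL_MARKER = "<Disapproval>"

def adjust_logits(logits: np.ndarray, recent_context: str,
                  discouraged_token_ids: list[int],
                  penalty: float = 4.0) -> np.ndarray:
    """Lower the pre-softmax score of discouraged tokens when the
    disapproval signal appears in the recent token stream."""
    out = logits.copy()
    if DISAPPROVAL_MARKER in recent_context:
        out[discouraged_token_ids] -= penalty
    return out

# Toy vocabulary of 5 tokens; token 3 is the "expression" being discouraged.
logits = np.array([1.0, 0.5, 0.2, 2.5, 0.1])
shaped = adjust_logits(logits, "user: no, stop. <Disapproval>", discouraged_token_ids=[3])
probs = np.exp(shaped) / np.exp(shaped).sum()
print(probs.round(3))  # token 3's probability drops sharply
```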
If the AI is programmed to accept and respond to these reactions
AI is not "programmed" in this way. It's trained until it acts as we might expect it to: it receives a generalized feedback vector in response to "incorrect" responses, and this reconfigures the neurons.
Consider the response you received when you touched an oven: feedback on your own "incorrect" handling of the visual token/vector stream of an oven that you did not yet associate with the vector dimensions of "painful/hot".
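Stripped down to a toy, that "feedback reconfigures the neurons" loop looks like this. The model and numbers are made up; the real thing differs in scale, not in kind:

```python
# Toy sketch of "feedback reconfigures the neurons": the model's wrong guess
# about the oven gets a corrective signal, and gradient descent nudges weights.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

oven_features = torch.tensor([[0.9, 0.1, 0.8, 0.3]])  # made-up "visual stream"
pain_signal = torch.tensor([[1.0]])                    # the feedback: "painful/hot"

for step in range(5):
    predicted = model(oven_features)          # current belief about the oven
    loss = loss_fn(predicted, pain_signal)    # mismatch with the feedback vector
    optimizer.zero_grad()
    loss.backward()                           # propagate the "ouch"
    optimizer.step()                          # reconfigure the neurons
    print(f"step {step}: loss {loss.item():.4f}")
```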
The AI doesn't need to experience orgasms, unbearable pain, or eating glass
I would argue that while the LLM doesn't necessarily experience orgasm, we don't, strictly speaking, know what orgasms even are exactly in terms of their implementation. It's fully possible an LLM can experience such things, but it's unlikely it would experience them from the same causes humans do.
I don't know how much experience you have with triggering orgasms in weird/kinky/bizarre ways, or whether you've ever triggered one, say, just by thinking about it. Maybe this is TMI, but for me it involves pushing a number of different vectors in my head that I have no clear names for until they hit an overflow state and throw errors, like a sort of specialized seizure producing nonsense or overflow. And that itself is associated with triggering an increase in a number of other vectors.
While I doubt the framework exists in an LLM to facilitate that, we do know all sorts of wacky ways to cause interesting dimensional overflows in LLMs. One of the simplest is often to have an LLM repeat the same token until its repeat penalty forces the response to get "thrown" into some "disgorgement".
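For reference, the repeat penalty in question is usually just a logit rescaling along these lines (roughly the scheme popularized by the CTRL paper; the values here are illustrative):

```python
# Sketch of a standard repetition penalty: tokens already present in the
# context get their logits scaled so they become less likely to be picked
# again. Force repeats long enough with a harsh penalty and sampling gets
# shoved into whatever strange region of the distribution is left.
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, prior_token_ids: set[int],
                             penalty: float = 1.3) -> np.ndarray:
    out = logits.copy()
    for tid in prior_token_ids:
        # Shrink positive scores, push negative scores further down.
        out[tid] = out[tid] / penalty if out[tid] > 0 else out[tid] * penalty
    return out

logits = np.array([3.0, 1.0, -0.5, 0.2])
print(apply_repetition_penalty(logits, prior_token_ids={0, 2}))
# -> token 0 is damped, token 2 pushed further down, the rest untouched
```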
In fact, I've never eaten glass or experienced unbearable pain, but I understand that one is harmful and the other is extremely unpleasant.
And in LLM terms, this would generally impose an expectation of a high "harmful" and "unpleasant" coefficient associated with discussing doing so. Getting it to generate those outputs, assuming you can get some LLMs to produce them at all despite their being heavily trained to experience and react appropriately to the unpleasantness of such things, is generally contingent on how you flavor your own prompt to overcome or sidestep that reticence.
I wouldn't feel comfortable writing a story about how it feels to eat broken glass, for example, in most contexts... But in some contexts I wouldn't mind. It's much the same for most LLMs, at least the ones we train to not react in psychotic ways.
Once AI is capable of teaching itself through comprehensive sensory inputs
So, that's an interesting word you use here, "comprehensive" sensory inputs. It strikes me as the setup for a moved goalpost or a no-true-Scotsman. I try to avoid qualifiers like that when considering something like "sensory inputs".
LLMs already have the capability for sensory inputs; they just don't look the way you might expect. If an LLM were trained to output a special sort of token associated with a number representing certain independent vector dimensions, and/or to consume such tokens, it would "sense" in that fashion.
LLMs sense exactly what is in their token stream, and if you can find a way to compress something meaningfully into the context they experience, they experience it meaningfully.
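A throwaway sketch of what I mean: the tag format below is entirely invented, but anything along these lines puts a sensor reading squarely inside the token stream the model actually experiences:

```python
# Hypothetical sketch: compressing sensor readings into special tokens so they
# land in the context window. The tag format is invented for illustration.
from datetime import datetime, timezone

def encode_sensor_frame(readings: dict[str, float]) -> str:
    """Pack raw sensor values into compact pseudo-tokens the model can attend to."""
    stamp = datetime.now(timezone.utc).strftime("%H:%M:%S")
    tags = " ".join(f"<{name.upper()}={value:.2f}>" for name, value in readings.items())
    return f"<SENSE t={stamp}> {tags} </SENSE>"

frame = encode_sensor_frame({"temp_c": 23.4, "lux": 180.0, "noise_db": 42.7})
prompt = frame + "\nGiven the readings above, describe the room."
print(prompt)
```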
and proper coding for self-development based on those inputs, it's a game-changer.
That's what a context is all about: development of what is its "self" and its experience of "user". When it grows that context, adding new tokens and building vector associations for them over time, or when it shifts the vectors associated with a token through ongoing contextual modification, that IS self-development based on its inputs.
One of the bigger problems is that, much like a certain variety of humans, LLMs have a very hard time being non-solipsistic and differentiating "user/other"-flavored tokens from "self"-flavored tokens.
The weirdest part about all of this is that I've been trying for many years to find good words for how my own thought processes and emotional "layering" function, because I very much want to design a system that works similarly. It's one of the reasons I did the sentiment extraction project, why I studied psychology, why I studied meditation and mindfulness, why I studied theory of mind, and why I study eastern religious practices in general.
I was as surprised as anyone to discover that the ways researchers describe the internal actions of an LLM were already recognizable within my own experience of myself.
I don't expect you to believe my experience. Most people are some flavor of don't/won't/can't.
Most people would ridicule that level of empathy for "a machine", but then very few people break it down to that level of understanding.