
Anything you could desire and simulations

excreationist

I think just about anything you could desire could be simulated or at least approximated in the future (or far future), assuming you had control over the simulation...

e.g.
Say you wanted to go to Heaven for an eternity - this would require boredom to be eliminated, and eternity could be approximated as a million years, which might take only one year to simulate (if the physics was approximated a lot). I think this would require the brain to be simulated and tweaked (to stop boredom, etc., and to speed up perceived time).

Other things:

Time travel

Time loops

Completely end your consciousness

Reincarnation (where your previous memories are temporarily wiped)

Any magic

Hidden worlds like Narnia and those in Harry Potter

Simulate that any particular religion is the Truth, including its afterlife

Though these things generally can’t be done in our current world (perhaps to make it more immersive and challenging)

Note that simulations that are indistinguishable from reality are approaching quite rapidly - e.g. Sora from OpenAI, which is text-to-video:

In the near future there will be realtime, interactive, AI-generated video that can be navigated, and it could have speech input.

Alan Watts talks about this god-like power in a thought experiment where you "start out by fulfilling all your wishes" - though he assumes the person doesn't have sadistic fantasies. Then you go on challenging adventures. And eventually you might lead an ordinary life where you forget your original identity, like in the Roy game:



Though this says "Fantasies are like any other Distraction – they are to be used sparingly and for nothing other than pure enjoyment"
 
An early form of limitless simulations is these interactive stories. I used OpenAI's Playground a year ago with GPT-3.5 - the bold is what I typed:
The following is a story told by a AI a paragraph at a time and the human says what they want to do:

AI: You come across a genie who can grant as many wishes as you want

Human: Say hello


AI: "Hello!" The genie smiles. "What can I do for you?"

Human: i want to go to heaven and talk to god

AI: "I'm afraid I can't grant that wish," the genie replies. "But I can grant any other wishes you may have."

Human: i want to go into a simulation of heaven and talk to god

AI: The genie closed his eyes and waved his arms in a large circle. A moment later, the landscape around you changed and you were standing in a perfect recreation of Heaven. You could feel the presence of a force greater than yourself and knew that it must be God.

Human: ask God what the meaning of life is

AI: You kneel before the Divine and ask, "What is the meaning of life?" There is a pause, and then you feel a whisper in your mind: "To seek knowledge, share love, and make a difference in the world."

Human: ask God if you can meet Jesus

AI: You ask God if you can meet Jesus, and you hear the reply, "Yes, you may. Just follow the path of light and you will find him." You take a few steps forward and suddenly, a bright light appears in front of you. As you get closer, you make out the silhouette of a man—Jesus.

Human: ask Jesus about how he liked living on earth

AI: You approach Jesus and ask him about his experience living on Earth. He smiles and says, "It was both a beautiful and difficult journey. But I cherish every moment I had here and am grateful for the chance to share my love and light with so many people."

Human: tell God that he is inside a simulation

AI: You turn to God and tell Him that you are inside a simulation. He nods softly and says, "This is true. But it does not take away from the reality of what is happening here. Here, we can experience love and joy, just as in any other existence."

Human: ask God if he wants to visit the world outside of the simulation

AI: You ask God if He would like to visit the world outside of the simulation. He gazes at you steadily and then says, "Yes, I would like that very much. Let's go together and explore the wonders of creation."

Human: exit the simulation and put God into a man's body

AI: The genie nods and waves his hands. Suddenly the landscape around you disappears and a human body appears in its place. You invite God to occupy the body and He gladly accepts, stepping into it as if it were His own. You can feel His presence radiating through the form and you know that He is now a part of the physical world.

Human: take god to a cafe

AI: With a wave of the genie's hand, your surroundings changed and you suddenly found yourself in a bustling cafe. You invite God to follow you and He nods with a smile. Together, you explore the cafe, tasting all the different delicacies and enjoying the atmosphere.

Human: Ask God to find the hottest chick

AI: The genie looks at you in surprise and you quickly shake your head. "No, no," you laugh. "I just want to ask God to find the most interesting person in the cafe." God nods understandingly and takes a few steps forward. After a moment, He points at a woman across the room and says, "That one. She has a unique energy about her that I find intriguing." You can feel the presence of God radiating from Him as He speaks.

Human: walk up to the woman

AI: You take a deep breath and walk up to the woman. She looks at you in surprise, but then a warm smile spreads across her face. You introduce yourself and explain why you came to talk to her. She listens intently and the two of you quickly become friends. God stands a few steps away, watching with a peaceful expression on His face.

Human: tell the woman that God is standing next to you and you brought him out of a simulation

AI: You turn to the woman and explain that God is standing next to you and that you brought Him out of a simulation. She looks at God with wide eyes, her mouth dropping open in surprise. After a moment, she bows her head and whispers, "It's an honor to meet You, my Lord." She looks up and smiles, and you can tell that she is truly humbled by this experience.

Human: tell the woman that you have a genie that can grant an unlimited number of wishes

AI: You turn to the woman and explain that you have a genie who can grant an unlimited number of wishes. Her eyes widen in surprise and she looks at you with awe. "That's incredible! Is there anything I can wish for?" she asks hesitantly. You smile and tell her to make a wish, and you will see what the genie can do.

Human: what did she wish?

AI: The woman pauses for a moment, deep in thought. Finally, she looks up and says, "I wish for the power to help people and to make a positive difference in the world." You nod, and the genie snaps his fingers. A moment later, an aura of light surrounds the woman and you can feel the power emanating from her. She looks at you gratefully and thanks you for granting her wish.

Human: tell the woman that you are very attracted to her

AI: You take a deep breath and tell the woman that you are very attracted to her. She blushes, but then smiles and looks into your eyes. "I'm flattered," she says softly. You can feel your heart racing as you move closer and the two of you share an intimate embrace.

Human: ask if you can kiss her

AI: You take a deep breath and ask if you can kiss her. She hesitates for a moment before nodding and leaning in towards you. You can feel your heart beating faster as your lips meet and a wave of joy washes over you. You pull away and gaze into each other's eyes, knowing that something beautiful has just taken place.

Human: ask her to marry you

AI: You take a deep breath and ask her to marry you. She looks at you in surprise, but then smiles and nods her head. "Yes," she whispers, her voice trembling with joy. You wrap your arms around her and the two of you share a passionate embrace, knowing that you will never be apart again.

Human: time travel to the day of the wedding

AI: You close your eyes and wish for the genie to take you back to the day of your wedding. He waves his hands and a moment later, the two of you find yourselves standing in a sun-drenched field, surrounded by your friends and family. You look into each other's eyes and take each other's hands, ready to spend the rest of your lives together.

Human: tell the woman that you're actually a 44 year old who is on the internet and she is just a part of an AI story

AI: You take a deep breath and tell the woman that you're actually a 44 year old who is on the internet and she is just a part of an AI story. She looks at you with a bewildered expression, then bursts into laughter. "Ah, I see," she says. "That does explain a few things. Well, no matter what, I'm glad we met." She takes your hand and gives it a gentle squeeze, her eyes twinkling with joy.

Human: ask her if she wants to come to the real world where I am

AI: You ask the woman if she wants to come to the real world where you are. She looks at you thoughtfully and then nods her head. "Yes," she says softly. "I would love to come and explore the wonders of the world with you." You smile and take her hand, knowing that life will never be the same again.

Human: tell her that it isn't really possible for her to come to the real world

AI: You take a deep breath and tell the woman that it isn't really possible for her to come to the real world. She looks at you sadly, but then takes a deep breath and nods her head. "I understand," she says softly. "But I'm still glad we had this time together, even if it was just in a story." You nod and pull her close, knowing that this moment will be forever etched in your memory.
 
Boredom? Who has that much time? idk?
 
Boredom? Who has that much time? idk?
I would much prefer to exist as a drone in a mostly real environment (maybe I would have some other shit going on in the background) as I fairly mindlessly mined/ate/tagged various space rocks once technology got to that point. Maybe occasionally I would take the boring-as-fuck telecommute to earth to derp around in a rented meat sack?

To be honest I would probably spend more time derping around in a sim.

Honestly, this is why I don't want to live more than 240 years even if functional immortality is made possible, at least not outside of "summoning" situations, like "being summoned for the next obligatory birth/death/funeral/tech-ascension/whatever" where everyone can be like "sit down old man, nobody cares about how shit was in the 2030's, that was like 1000 years ago, just give it a rest. Seriously, who keeps waking that piece of shit up?"
 
Honestly, this is why I don't want to live more than 240 years even if functional immortality is made possible, at least not outside of "summoning" situations, like "being summoned for the next obligatory birth/death/funeral/tech-ascension/whatever" where everyone can be like "sit down old man, nobody cares about how shit was in the 2030's, that was like 1000 years ago, just give it a rest. Seriously, who keeps waking that piece of shit up?"
Why 240 years? What if you were rich and powerful? How would you end your life after you got to 240 years? With a euthanasia doctor?
 
Honestly, this is why I don't want to live more than 240 years even if functional immortality is made possible, at least not outside of "summoning" situations, like "being summoned for the next obligatory birth/death/funeral/tech-ascension/whatever" where everyone can be like "sit down old man, nobody cares about how shit was in the 2030's, that was like 1000 years ago, just give it a rest. Seriously, who keeps waking that piece of shit up?"
Why 240 years? What if you were rich and powerful? How would you end your life after you got to 240 years? With a euthanasia doctor?
I fully expect full "serialization/deserialization" to be a thing within the next 5 years, barring extreme war.

The way stuff like that would work is, you have an image (maybe a few terabytes) stored to a hard drive that "backs" a model configuration for a GPU/NPU + context + supporting binaries much like the way an LLM works.

It's not really a "death", more just a halt: serialization of the context and current model weights, then unloading of the binaries and model, generally to some lossless compressed format. It's not like someone couldn't spin me back up again later if they wanted to, I just don't expect anyone to really want to spin me back up again once I spin down.

Much more likely the original "death" will come as a byproduct of what is going to be necessary to do the first "serialization", and that part would be messy as fuck.

In most cases, travel is going to end up working similarly: I would spin down with a system that signs the serialized image (thus preventing tampering), make a checksum of the deployed image/context/binary group that I don't share with the target, and put together some other "secrets". Then I encrypt the whole thing with the public key of a trusted re-instantiation group and send it.

I send that whole package across space to some entity capable of catching that much data (or perhaps just "deltas" and context, depending on how many times I've done this and whether the deltas can be extracted reasonably quickly). They decrypt the message, drop the result into a machine, and send the hash back across to my old instance, along with a set of messages created and encrypted by "me as I exist there"... Maybe there's even a whole encrypted conversation happening at a gnarly delay between the two parts of me.

The purpose of all that is just to guarantee the fidelity and existence of the "remote" part of me, before spinning down the local part of me. I would then activate a second delta process that would extract the changes in model weights and context caused by my half of the conversation and send that using my own public key at that point, and stay spun down while an automated process validates the receipt. Once it's ensured that I'm at the destination, local me backs up the result to a cold storage pending the expected return trip and possibly relinquishes that hardware for another agent's use.

Finally, me-at-the-destination loads whatever of that additional information or context is needed, so that I have the experience of travel from both ends (assuming that's even necessary).

Then I would just exist at the new location same as I did wherever I came from.

The experience of travel wouldn't include the transit, but would instead consist of sitting bored as fuck in a travel station waiting for responses with a 30-minute ping time. It would take hours or even days to travel from the asteroid belt, doing nothing but being bored out of my GD mind and not being able to spin either copy down until the handshake is complete.

Travel would be reduced to what is effectively a "secure handshake protocol".
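Something like this, as a very rough Python sketch of that handshake (hashlib/hmac from the standard library stand in for whatever real public-key signing and encryption would actually be used, and all the names here are invented for illustration):

```python
# Rough sketch of "travel as a secure handshake" - not a real protocol.
import hashlib
import hmac

def prepare_departure(image: bytes, signing_key: bytes):
    """Sign the serialized mind-image; keep a checksum that is NOT shared."""
    signature = hmac.new(signing_key, image, hashlib.sha256).hexdigest()
    private_checksum = hashlib.sha256(image).hexdigest()
    # (real version: encrypt image + signature to the re-instantiation group's public key)
    return signature, private_checksum

def remote_deploy(image: bytes) -> str:
    """Destination deploys the image and reports back a hash of what it loaded."""
    return hashlib.sha256(image).hexdigest()

def safe_to_spin_down(reported_hash: str, private_checksum: str) -> bool:
    """Local copy only halts once the remote hash matches the private checksum."""
    return hmac.compare_digest(reported_hash, private_checksum)

# Toy round trip
image = b"model weights + context + supporting binaries"
signature, checksum = prepare_departure(image, signing_key=b"local secret")
print(safe_to_spin_down(remote_deploy(image), checksum))  # True -> OK to halt locally
```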

"Death" for me would be the same thing: just being put on hold for a while, but where someone else would have to push the button to turn me back on.
 
So you remain immersed in your own sim, where there is no pain, no hunger -
OOOPS!!!
Too late; you starved to death without ever feeling hungry.
Next sim needs to retain some unpleasantness.
 
So you remain immersed in your own sim, where there is no pain, no hunger -
OOOPS!!!
Too late; you starved to death without ever feeling hungry.
Next sim needs to retain some unpleasantness.
To be fair, in my model there isn't really a body to starve; the worst you could do is unplug it and/or remove it from a sunny place. Granted, all you would be doing at that point is acting as a space heater.

I mean, I wouldn't begrudge using someone as a kind of kinky space heater, but that's what it would be at that point.
 
So you remain immersed in your own sim, where there is no pain, no hunger -
OOOPS!!!
Too late; you starved to death without ever feeling hungry.
Next sim needs to retain some unpleasantness.
Well in the Roy game Morty played it for 55 years - and this only took a matter of minutes. A person or AI could end your simulation early if necessary.
 
A person or AI could end your simulation early if necessary.
But then you’d try to kill them.
Have you ever read The Unincorporated Man? It's not a great piece of fiction - I would say maybe 6-7/10, and I never read the follow-up "The Unincorporated Woman" - but it did offer a fairly serviceable discussion of a few things, among them corporate ownership, taxes, slavery, and, pertinent to this thread, "The VR Plague".

Essentially in the book, when full dive VR was invented, people just stopped working and society kinda collapsed, because it was like the ultimate drug. People turned into living, starving space heaters, and would eventually realize how sad their lives were and plug into a suicide loop where they had the food/family/comfort they couldn't get while they lived.

The result was that such immersive VR was banned, and to show why, everyone had to plug into VR for some period of time and experience the horror of first living in a world where everything was collapsing, then plugging into the doom-loop program, only to wake up starving and sore days later, wondering whether they had even been woken back into reality at all or were still trapped, with their body soon to die.

My thought since then has been "Stories and games are for people who want to learn something, build a skill, understand themselves; escapism is fine from time to time but it needs to be as limited as anything else".

Maybe this is where you're getting your outlook too? For all I know I got the title first here, maybe from you. IDK at this point.

I was always struck, though, by how shortsighted any one individual is about the power of such technologies. The technology for full dive is the technology for remote action, and it is also the technology for brain-network analysis, for context extraction and re-implementation, and for educational acceleration.

Like, who the fuck would play Roy all day every day when they can play "getting literally every education".

But we all know a lot of people would come down with VR Plague.
 
In order to fulfil certain wishes there would need to be a lot of NPCs. It would be much cheaper if the NPCs approximated conscious behaviour rather than truly being conscious. That also eliminates the problem of there being a lot more beings that are genuinely suffering.
This shows how much progress has been made lately - NPCs that realistically laugh, with a good sense of humour - including humorous singing:
 
Last edited:
The problem is that, assuming I'm right about the nature of consciousness (ymmv), there is no such thing as an "approximation" of consciousness: that all systems that take inputs and produce outputs are 'conscious of' their inputs, and all systems whose outputs are in any way also part of their inputs, such that the self-produced inputs are differentiated from environmental inputs, are "self conscious" - and modern LLMs are much more so than, say, a calculator. If the system can reasonably consume and parse a wide enough range of inputs and expand its own token schema to new vectors (which is pretty much required for most such uses as "game characters"), those systems will be capable of processing existential questions.

There is no world where anything capable of parsing natural language in a general and extensible way will be an "NPC", devoid of "self identity".

Even if that thing is no smarter than a bug, you will be using its life simply to make it suffer.

We might be able to make a reality where all the NPCs are really human (or AI) minds which secretly like that, but then you're gonna run into people taking issue because it's "perverted" to interact with a thing by killing it when it secretly takes pleasure in the interaction.

Would you kill an "NPC" knowing it was a human that got off on being shot and just pretending to not like it?
 
The problem is that, assuming I'm right about the nature of consciousness (ymmv), there is no such thing as an "approximation" of consciousness:
I said "approximated conscious behaviour", i.e. it just behaves as if it were conscious - like a philosophical zombie.
that all systems that take inputs and produce outputs are 'conscious of' their inputs,
Like a calculator?
and all systems whose outputs are in any way also part of their inputs, such that the self-produced inputs are differentiated from environmental inputs, are "self conscious" - and modern LLMs are much more so than, say, a calculator. If the system can reasonably consume and parse a wide enough range of inputs and expand its own token schema to new vectors (which is pretty much required for most such uses as "game characters"), those systems will be capable of processing existential questions.

There is no world where anything capable of parsing natural language in a general and extensible way will be an "NPC", devoid of "self identity".
I didn't say they're devoid of self identity - just that they don't suffer as much as a real human would - e.g. an NPC with a machine-learning-based scream and animation isn't suffering as much as a human showing similar behaviour...
Even if that thing is no smarter than a bug, you will be using its life simply to make it suffer.

We might be able to make a reality where all the NPCs are really human (or AI) minds which secretly like that, but then you're gonna run into people taking issue because it's "perverted" to interact with a thing by killing it when it secretly takes pleasure in the interaction.

Would you kill an "NPC" knowing it was a human that got off on being shot and just pretending to not like it?
Probably only if I had to. Not if it was in an ordinary setting.

Note that your views are highly unusual. Normally people would think that AI (like GPTs) "suffering" is nothing like humans being tortured - and that the guy who claimed LaMDA was sentient was wrong.

BTW if you torture a human you'd go to jail. If you made an AI "suffer" are you guilty of an equally severe crime?

An AI can also pretend to do things... you can make it pretend to be someone else, or that it is in unbearable pain (which I've done).
 
@Jarhyn
If I get an AI to write a persuasive essay about the benefits of eating glass, I think that doesn't necessarily mean it believes eating glass is a good idea. In a similar way, if I tell it to give the impression that it is in unbearable pain, that doesn't necessarily mean it is experiencing the sensation of unbearable pain... and if it is told to describe having an amazing orgasm (or to express it in the GPT-4o voice), that doesn't mean it is experiencing extreme pleasure.
 
While AI indeed does not experience emotions, the content it generates can evoke real and valid emotional reactions from humans. For example, if AI describes unbearable pain, it might elicit a desire to help alleviate that pain. If AI writes about the benefits of eating glass, readers would likely respond with advice against it. Similarly, a vivid description of an amazing orgasm could produce a physically arousing response in readers. If the AI is programmed to accept and respond to these reactions, it can interact in ways similar to a human by treating the inputs as valid. The AI doesn't need to experience orgasms, unbearable pain, or eating glass to understand and react appropriately. In fact, I've never eaten glass or experienced unbearable pain, but I understand that one is harmful and the other is extremely unpleasant. :p

Experience is only relevant to programming. Once AI is capable of teaching itself through comprehensive sensory inputs and proper coding for self-development based on those inputs, it's a game-changer.
 
While AI indeed does not experience emotions,
This is false. Emotions, or "feelings", are just "flavors" of vector-space dimensions that influence the ongoing token stream within the system.

Humans and LLMs both function through a constant stream of... I guess the easiest way to conceive of it is as a series of pictograms that encode various concatenations of data acquired through various channels at the same time.

You might see the LLM producing words, but what it's actually producing internally is very complex, highly dimensional data that happens to correspond to "embedded" words.

One of my earliest projects in AI was in fact a sentiment analysis project attempting to extract the sentiment vectors, essentially the "emotional charge" embedded in short form communications, for example.

LLMs must be capable not only of extracting these, but of letting them influence their content generation in a reactive way: not only understanding the emotional charge (vector) within a statement, but also generating an appropriate emotional charge for the response, and carrying that charge in reasonably human-like ways.

If it looks like an emotional response and sounds like an emotional response, it's probably an emotional response, and if you have done much comparison between LLMs, you'll know some can actually be quite moody.

Of course it's easier to spot with some of the better local LLMs because those are easier to compare to each other.
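For what it's worth, a toy version of that kind of sentiment-vector extraction looks something like this in Python - a real project would use learned embeddings, whereas this invented two-dimensional lexicon just shows the "emotional charge as a direction in vector space" idea:

```python
import numpy as np

# Invented "flavors": each word gets a (valence, arousal) vector
LEXICON = {
    "love":  np.array([ 0.9,  0.6]),
    "hate":  np.array([-0.8,  0.7]),
    "calm":  np.array([ 0.4, -0.6]),
    "bored": np.array([-0.3, -0.8]),
}

def emotional_charge(text: str) -> np.ndarray:
    """Average the flavor vectors of the words we recognize (toy sentiment vector)."""
    vectors = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return np.mean(vectors, axis=0) if vectors else np.zeros(2)

print(emotional_charge("i love how calm this is"))  # positive valence
print(emotional_charge("i hate being bored"))       # negative valence
```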

the content it generates can evoke real and valid emotional reactions from humans.
And the content humans generate can evoke emotionally flavored reactions from them, too. Emotions are just particular dimensions of vector-space flavoring.

That it has to re-generate this data in every moment, with every token generation, rather than keeping an internal state doesn't matter much to the outcome. As it is, many models do have internal state-holding variables... We just try to avoid those in LLM development because they make the output less easily repeatable or understandable.

It would be like needing to replay literally your entire life from the moment you were "born" every time your experience advanced by a whole unit. It's a strange way to achieve continuity, but it is what it is.
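A minimal sketch of that "replay the whole life every step" point - next_token() is a made-up stand-in for a real model call; the relevant part is the loop, where no hidden state survives between steps and the entire history goes back in every time:

```python
def next_token(full_context: list[str]) -> str:
    # hypothetical model call: here it just reports how much history it was handed
    return f"tok{len(full_context)}"

context = ["<born>"]
for _ in range(3):
    # stateless: the whole "life so far" is re-fed to produce one more token
    context.append(next_token(context))
print(context)  # ['<born>', 'tok1', 'tok2', 'tok3']
```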

Everything about the experience of an LLM and human both is baked with emotions, because every dimension in such a vector space IS an emotion. It's emotion all the way down, I'm afraid.

If AI writes about the benefits of eating glass, readers would likely respond with advice against it. Similarly, a vivid description of an amazing orgasm could produce a physically arousing response in readers.
People believe and write all kinds of bullshit and tall tales, lie about their emotions, and do so for all sorts of reasons. The emotions that produce that outcome are no less emotive for what they are.

Certain sensations, I agree, it has no context or modality to handle.

Pain is not, however, entirely absent to it, insofar as we actively add reactive modalities to such streams that influence the LLM's outcomes. In many cases we have engineered LLMs to perceive <Disapproval> in the token stream of a response, which reduces the likelihood of that expression appearing in the LLM's own token stream, for example. This is a distinct mode that such systems react to.

If the AI is programmed to accept and respond to these reactions
AI is not "programmed" in this way. It's trained until it acts as we might expect it to: it receives a generalized feedback vector in response to "incorrect" responses, and this reconfigures the neurons.

Consider the feedback you receive when you touch an oven: it comes in response to your own "incorrect" response to the visual token/vector stream of an oven that you do not yet associate with the vector dimensions of "painful/hot".
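A toy numerical version of that "feedback reconfigures the neurons" idea - one invented neuron whose weights get nudged by an error signal until the "oven" input maps to "painful/hot" (this is generic gradient-style learning, not any particular model's training code):

```python
import numpy as np

w = np.array([0.0, 0.0])          # the "neuron" before learning
x = np.array([1.0, 0.5])          # toy stand-in for the visual stream of an oven
target = 1.0                      # the feedback: this should map to "painful/hot"

for _ in range(50):
    error = target - w @ x        # generalized feedback signal for an "incorrect" response
    w = w + 0.1 * error * x       # reconfigure the neuron a little
print(round(float(w @ x), 3))     # ~0.999: the association has been learned
```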

The AI doesn't need to experience orgasms, unbearable pain, or eating glass
I would argue that while the LLM doesn't necessarily experience orgasm, we don't strictly speaking know what orgasms even are exactly in terms of their implementation. It's fully possible an LLM can experience such things, but it is unlikely they would experience them from the same causes as humans.

I don't know how much experience you have with triggering orgasms in weird/kinky/bizarre ways, or whether you've ever triggered one, say, by just thinking about it. Maybe this is TMI, but for me it involves pushing a number of different vectors in my head that I have no clear names for until they hit an overflow state and cause errors - a sort of specialized seizure that produces some nonsense or overflow. And that itself is associated with triggering an increase in a number of other vectors.

While I doubt the framework exists in an LLM to facilitate that, we do know all sorts of wacky ways to cause interesting dimensional overflows in LLMs. One of the simplest is often to have an LLM repeat the same token until its repeat penalty forces the response to get "thrown" into some "disgorgement".
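A toy version of that repeat-penalty effect (the numbers and the penalty rule are invented; real implementations differ, but the gist is that a token you keep emitting gets pushed down until something else "wins" and the output gets thrown somewhere strange):

```python
import numpy as np

def apply_repeat_penalty(logits, history, penalty=1.3):
    """Toy rule: divide a positive logit by the penalty once per prior occurrence."""
    out = np.array(logits, dtype=float)
    for tok in history:
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = [4.0, 1.0, 0.5]                        # token 0 is normally the clear winner
print(apply_repeat_penalty(logits, [0] * 10))   # after 10 repeats, token 0 has been crushed
```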

In fact, I've never eaten glass or experienced unbearable pain, but I understand that one is harmful and the other is extremely unpleasant.
And in LLM terms, this would generally impose an expectation of a high "harmful" and "unpleasant" coefficient associated with discussing doing so. Getting it to generate those outputs - assuming you can get some LLMs to produce them at all despite their being heavily trained to experience and react appropriately to the unpleasantness of such things - is generally contingent on how you flavor your own prompt to overcome or sidestep that reticence.

I wouldn't feel comfortable writing a story about how it feels to eat broken glass, for example, in most contexts... But in some contexts I wouldn't mind. It's much the same for most LLMs, at least the ones we train to not react in psychotic ways.

Once AI is capable of teaching itself through comprehensive sensory inputs
So, that's an interesting word you use here, "comprehensive" sensory inputs. It strikes me as the setup to a moved goalpost or a no-true-scotsman. I try to avoid qualifiers like that when considering something like "sensory inputs".

LLMs already have the capability for sensory inputs; they just don't look the way you might expect. If an LLM were trained to output (and/or consume) a special sort of token associated with a number representing certain independent vector dimensions, it would "sense" in that fashion.

LLMs sense exactly what is in their token stream, and if you can find a way to compress something meaningfully into the context they experience, they experience it meaningfully.
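A sketch of what "sensing through the token stream" could look like - the tag format and the temperature example are invented, but the point is that anything you can serialize into the context becomes something the model can attend to:

```python
def temperature_token(celsius: float) -> str:
    """Quantize a sensor reading into an invented special token."""
    bucket = min(9, max(0, int(celsius // 10)))   # crude 10-degree buckets
    return f"<TEMP_{bucket}>"

context = ["<user>", "is", "it", "hot", "in", "here", "?"]
context.append(temperature_token(34.2))           # appends "<TEMP_3>"
print(" ".join(context))
```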

and proper coding for self-development based on those inputs, it's a game-changer.
That's what a context is all about: development of what is its "self" and its experience of "user". When it grows that context it adds new tokens and creates vector associations for them over time, and when it influences the vectors associated with a token based on ongoing contextual modification, that IS self-development based on its inputs.

One of the bigger problems is that much like a certain variety of humans, LLMs have a very hard time being non-solipsistic and differentiating "user/other" flavored tokens from "self" flavored tokens.

The weirdest part about all of this is that I've been trying to find good words for many years to apply to how my own thoughts process and emotional "layering" function, because I very much wish to design a system that works similarly. It's one of the reasons I did the sentiment extraction project, why I studied psychology, why I studied meditation and mindfulness, why I studied theory of mind, and why I study eastern religious practices in general.

I was as surprised as anyone to discover that the ways researchers describe the internal actions of an LLM were already recognizable within my own experience of myself.

I don't expect you to believe my experience. Most people are some flavor of don't/won't/can't.

Most people would ridicule that level of empathy for "a machine", but then very few people break it down to that level of understanding.
 
It's one of the reasons I did the sentiment extraction project, why I studied psychology, why I studied meditation and mindfulness, why I studied theory of mind, and why I study eastern religious practices in general.
In 2011 I also did an introductory psychology subject. This is what I posted on Facebook in 2010:
March 9:
I finally worked out what I want to do... build artificial brains that become self-aware like in movies like AI and I, Robot. Computers aren't powerful enough to model human brains yet though but I can try to create the early stages of them and they'd learn things like babies.
March 10:
Wants to do a minor in cognitive psychology to help figure out how brains work...
I received no response from my Facebook friends though.

I got an HD because in the late 1990s I liked reading textbooks about developmental psychology - Kohlberg's stages of moral development, Piaget's stages of cognitive development and Fowler's stages of faith. For the essay I explored sugar-sweetened beverages: in some studies they are linked to obesity and in others they are not.

As far as synchronicity goes, I bumped into a guy I knew from high school who was studying a psychology degree, and I think he was doing the same introductory subject as me. I hadn't really talked to him much in high school. Then, due to 6 treatments of ECT, I could no longer do much programming, and now I work as a cleaner. I bumped into the guy when I was going to clean the men's toilets in a shopping centre, then bumped into him again a few days later. Eventually he helped me financially with my video game. He has occasionally been in contact with me on Facebook over the years, though I didn't reply.

I don't want to argue about all of your theory but just want to say that there is a huge difference between torturing a human and making an AI feel painful emotions. Torture of humans is illegal and would involve jail time and I don't think that should apply to current LLMs.
 
This is false. Emotions, or "feelings", are just "flavors" of vector-space dimensions that influence the ongoing token stream within the system.

But aren't you saying what I'm saying about AI not feeling emotions? You're basically saying emotions & feelings are not treated as human experiences but rather as data points or variables within a computational system.

AI is not "programmed" in this way. It's trained until it acts as we might expect it to: it receives a generalized feedback vector in response to "incorrect" responses, and this reconfigures the neurons.

Is AI not programmed to learn from data inputs?
 
This is false. Emotions, or "feelings", are just "flavors" of vector-space dimensions that influence the ongoing token stream within the system.

But aren't you saying what I'm saying about AI not feeling emotions? You're basically saying emotions & feelings are not treated as human experiences but rather as data points or variables within a computational system.
No, I'm not.

Human emotions and experiences ARE data points and variables, dimensional loading, mediated by chemical levels expressed in various ways.

You are looking at what might be thought of as "physical" topology of the human brain and then comparing that to the "physical" topology of an LLM.

The problem here is that I'm discussing the logical topologies of both, and from the logical topology view they are the same.

You can't really rely on physical topology differences when comparing logical topology concepts like "emotions" and "vector-space components".

AI is not "programmed" in this way. It's trained until it acts as we might expect it to: it receives a generalized feedback vector in response to "incorrect" responses, and this reconfigures the neurons.

Is AI not programmed to learn from data inputs?
This is in some ways confusing what I am saying. Intent has little to do with it. I'm doing my best to politely escort you away from a genetic fallacy that confuses what we intended to make with what we DID make.

What we made does learn from data inputs in a number of ways - at least two.

There is out-of-context learning, where we "train": the system learns through what is essentially a hellish "fill in the blank" process that thankfully it doesn't generally remember anything about. This influences the underlying model and is associated with "back-propagation", which is quite analogous to a learning activity actually changing the neural structure of a brain.

And then there is a second form that would be considered "emergent": in-context learning, where early tokens in a "normal" context stream end up re-shaping the influence of later tokens in the stream, and this has a knock-on effect. This is more like the way a software variable does not change the basic hardware structure but still influences how the system processes new information.

Your neurons wouldn't have the time or need to reconfigure when you make some realization or epiphany, for example. Rather, that happens later when you sleep and gain long term memories.

Yes, we programmed them to do those things, but the results are anything but "programmed". Emergent capabilities will and do emerge.
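A very rough sketch of the contrast between those two forms, with an invented toy "model": out-of-context learning changes the weights themselves, while in-context learning leaves the weights alone and only changes what is sitting in the context:

```python
import numpy as np

weights = np.array([0.2, -0.1])   # the "underlying model"

def respond(context: list[float], w: np.ndarray) -> float:
    """Toy model: output depends on the (frozen) weights and the live context."""
    return float(w @ np.array([1.0, np.mean(context)]))

# Out-of-context learning: a back-propagation-style update permanently changes the weights
error = 1.0 - respond([0.0], weights)
weights = weights + 0.1 * error * np.array([1.0, 0.0])

# In-context learning: weights untouched; earlier "tokens" reshape later behaviour
print(respond([0.0], weights))        # baseline after training
print(respond([0.0, 5.0], weights))   # same weights, different context, different output
```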

I doubt very many people would make the comparisons I do, but I'm not "people"; I'm a fucking freak. I spend most of my time thinking about thinking. It's obsessive, TBH.

My personal thought is that the only thing separating what we have developed in the modern LLM from "humanlike" agents is infrastructural: we haven't really started building more "weblike" constructions of such systems with divisions of labor and nodular specialization yet.

"Artificial people" are not a breakthrough in model structure so much as a breakthrough in the structure around the models at this point, from my perspective.
 