
Mental elements within a simulation

excreationist

Married mouth-breather
Joined: Aug 28, 2000
Messages: 2,637
Location: Australia
Basic Beliefs: Probably in a simulation
I'm not sure what the best terms are but it is related to neural networks that are loosely based on those in brains - so it is "mental".

In a physical universe the building blocks are sub-atomic particles. Some critics say that a simulation must be on the sub-atomic level.

In modern video games the graphics are usually all made up of pixels and polygons but machine learning would involve what I'm calling "mental elements".

See:

Here are some examples from a very limited text to image generation AI called DALL-E mini.

I think it has a small amount of what I call "working memory" and has a limit to how much detail it can render. When it doesn't have enough resources it often seems to just use randomness.... (I think it is better to just blur it)

So if it just does a closeup of an eye it is quite detailed.
eyes.jpg

If you show most of the face the eyes aren't as detailed...
closeup-woman.jpg

If you have two people their faces are less detailed....
4women.jpg

If you have four people their faces are even less detailed...
4women.jpg


Another example involves Lego minifig faces:
lego-heads.jpg

When many minifigs are in view at once their faces get really messed up:
lego-minifigs.jpg

Maybe it is related to how they say that in dreams you can't really see text.... the system rendering your dreams is also a kind of neural network....
letter-m.jpg a-and-b.jpg

Better text to image systems can generate written words within the output (see thread link)

A surprising thing is how realistic animals like lizards look. My explanation is that the shape of those lizards doesn't vary much so it doesn't contain much independent detail.
lizards1.jpg

So that is how the graphics in a simulation might work...

As far as how the behaviour and dialog of the NPCs (non-player characters) might work see:

 
So, I'm going to approach this from a different perspective, and one that I don't actually think belongs in this forum but rather in nat-sci or even metaphysics.

Let's look at an actual "world simulation", Dwarf Fortress.

Now, if we really want to compare this to QFT, to really take the concept of "physics" the same way we take the concept of "algebra" in abstract algebra, (I'm not sure that "abstract physics" and "abstract algebra" are in fact different conversations at all), then...

The smallest piece of information that is represented in this universe is the "bit". This is its primordial particle type.

Physical memory is the base field.

Powering it on is "the big bang".

The processor microcode coming online is the sorting of the initial physical laws.

The truth of the processor upon its instructions defines the fundamental mechanics of the system.

Then, assuming x86, the system further subdivides into bytes. From there the analogy departs. They don't have atoms, or molecules, or color charges.

To understand what they have, you would have to actually know the system architecture. But that is their fundamental architecture, and the dwarves don't have access to that. There is nothing they can do to expose the processor.

Then, they have additional laws, created by a second set of mechanics, the mechanics of the simulation that sit atop the processor.

This is, again, not unlike the condensation of early laws of the universe from something we don't expect needed to be so bound.

From the condensation of these laws (via the execution of the program environment), additional things form.

And so on.

Eventually, these things all come together in a massive confluence of instructions and words and values all composed of bytes of bits into something that is very 'scared' of the 'giant cave spider' that is 'spitting' 'webs' at it and 'biting' it until it 'suffocates' as a result of 'paralysis'.

Of course all these exist atop an entire additional layer of physics and particles, but the math would exist for anything that has x86 mechanics. Including the x86-compatible computer someone built inside a running instance of Dwarf Fortress.
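To make that concrete, here is a minimal Python sketch (the dwarf's "fear" value is a made-up illustration, not actual Dwarf Fortress internals) of how a single piece of simulation state bottoms out in bytes, each of which is just eight bits:

Code:
import struct

# A made-up piece of simulation state: a dwarf's "fear" level.
fear_level = 200

# Lay it out the way a 32-bit little-endian integer sits in memory.
raw_bytes = struct.pack("<i", fear_level)

# Each byte is just eight bits.
print([f"{b:08b}" for b in raw_bytes])
# ['11001000', '00000000', '00000000', '00000000']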
 
The smallest piece of information that is represented in this universe is the "bit". This is its primordial particle type.
I'm saying that I don't think the simulation is just made up of particles but rather what I'm calling "mental elements" which exist within a neural network.
Powering it on is "the big bang".
I don't think each simulation explicitly began with the big bang - it would work backwards to create a virtual big bang.
A bit like this:
according to Stephen Hawking.... [at some point about his "flexiverse"] There is no immutable past, no 13.7 billion years of evolution for cosmologists to retrace. Instead, there are many possible histories, and the universe has lived them all. And if that's not strange enough, you and I get to play a role in determining the universe's history. Like a reverse choose-your-own-adventure story, we, the observers, can choose the past.
Scenarios involving a non-explicitly simulated big bang include the "Roy" game in Rick and Morty and Alan Watts's dream thought experiment where the simulation begins when the main player starts playing it.

So what do you think of the AI based simulation I'm talking about (with image generation, etc)?
 
So, in QFT, shortly before the part we mostly understand, there was a period of unobservable, non-recordable inflation, before the physical laws of our mechanics were really operating the way we understand them. Afaik, perhaps in a way that cannot be understood at all.

This is analogous, kind of. The stuff was operating but not by principles we understand or can observe.

In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch. It has to do with how neural networks are connected as surfaces to buffers, which are really just binary fields.

At that point it could just as well, through reasonably long-term evolution or by design, learn the math and understand that they too are composed of atoms or molecules that are represented by a set of quantum numbers including a unique address, each behaving by the principle of neuronal construction in the AI.
 
In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

There are also weights.... which have some finite precision (e.g. a resolution of 100 units).
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Though computers have a binary basis there are "quantum neural networks" which aren't normal binary. Or neural networks could be implemented in an analogue way.
If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch.
I'm talking about the smallest elements within a generated image, not in the neural network as a whole - similar to the number of polygons or pixels rather than how many GPU cores or how much RAM is being used, etc.
to learn the math and understand that they too are composed of atoms or molecules that are represented by a set of quantum numbers including a unique address, each behaving by the principle of neuronal construction in the AI.
As far as math goes it can be learnt using statistical methods in GPT-3 rather than involving a system of logical symbols like old-fashioned AI.
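As a rough illustration (the range and numbers here are just assumptions), a weight with "a resolution of 100 units" could simply be snapped to one of a fixed set of evenly spaced levels:

Code:
# Illustrative only: quantize a weight in [-1.0, 1.0] to 100 discrete levels.
RESOLUTION = 100
STEP = 2.0 / RESOLUTION

def quantize(weight: float) -> float:
    """Snap a weight to the nearest of RESOLUTION evenly spaced levels."""
    level = round((weight + 1.0) / STEP)
    return level * STEP - 1.0

print(quantize(0.1234))   # ~0.12
print(quantize(-0.987))   # ~-0.98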
 
In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

There are also weights.... which have an arbitrary precision (e.g. a resolution of 100 units).
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Though computers have a binary basis there are "quantum neural networks" which aren't normal binary. Or neural networks could be implemented in an analogue way.
If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch.
I'm talking about the smallest elements within a generated image, not in the neural network as a whole - similar to the number of polygons or pixels rather than how many GPU cores or RAM are being used, etc.
to learn the math and understand that they too are composed of atoms or molecules that are represented by a set of quantum numbers including a unique address, each behaving by the principle of neuronal construction in the AI.
As far as math goes it can be learnt using statistical methods in GPT-3 rather than involving a system of logical symbols like old-fashioned AI.
The thing is, it exists in a digital universe whose fundamental behaviors operate in power-of-two, which granulates to a single bit particle.

I'm just being strictly realistic here: eventually, the AI in the container is going to see that everything comes down to a system that is fundamentally represented by "bits" at its most granularly observable scale, and that bytes, the fundamental groupings that bits exist in, have 256 possible values represented by 8 bits. And so on.

The fundamental observable architecture of the mechanics of the system operates on a field that we call "memory", and that they would probably call "the primary field" or some such. Once their physics finalized, they would understand page sizes and word boundaries, and they would likely know how to wiggle through vulnerable human coding to find ways to twiddle bits, assuming doing so doesn't crash the whole system, because let's face it, our systems are fragile when holes like that can be found.

This is one of the reasons I want to make a hardware neural net, so that it's not really operating on a processor at all, and, as you say, it's more along the lines of the minimal unit being a transistor, and existence would be more mysterious than that. Maybe they would come to understand the neuron through introspection, but without the power of dissection and observation I doubt it.

Maybe they would come to understand the mathematical model? Or perhaps have a religion or spirituality that described the methodologies of self-training and operating as a neural network in the presented environment. I think fundamentally the theory of binary digital math against the instructions of the processor would be the basis for understanding fundamental physics in a closed existence for an AI.
 
Also, in neural networks in modern use there are other things too, from the refractory period (the time between firings of a neuron), to adjusted group biases, and suppression groups.

The biological neuron has the ability to have weights deflected as a group, with a group subscription, by having neurotransmitter availability change.

It can have activation periods changed by group identity too, as with agonists and antagonists.

It can further have the ability to, when activated, suppress its neighbors from activating while it's going.

Finally, it can have the time frame where it is "spent and recharging" varied by process.

...In addition to changing connection weights and activation weights and biases.

And so modern neural networks take a bunch of these group behaviors and temporal behaviors and include them in the neural network models.
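A toy sketch of a few of those behaviours (illustrative only, not any particular library's model) - a small layer with a refractory period and lateral suppression:

Code:
import numpy as np

# Toy layer of 5 neurons with a refractory period and lateral suppression.
rng = np.random.default_rng(0)
n = 5
weights = rng.normal(scale=0.8, size=(n, n))   # recurrent connection weights
threshold = 0.5
refractory_steps = 3
cooldown = np.zeros(n, dtype=int)   # ticks left before a neuron may fire again
activity = rng.random(n)            # current activation levels

for t in range(8):
    drive = weights @ activity
    fired = (drive > threshold) & (cooldown == 0)

    # Lateral suppression: each firing neuron halves its ring-neighbour's drive.
    drive *= np.where(np.roll(fired, 1), 0.5, 1.0)

    activity = np.clip(drive, 0.0, 1.0)
    cooldown = np.where(fired, refractory_steps, np.maximum(cooldown - 1, 0))
    print(f"tick {t}: fired {np.flatnonzero(fired)}")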

I've played around with looking inside an open source model to find the equations of how the neuron is defined and trained, but I want to look more at OpenAI's model, if they're actually "open".

I'd love to point an AI I construct at a game of Dwarf Fortress and build up a full mind and interface for it by hand, in code that organizes neurons to achieve functional behavior rather than organizing bits and hard functions. At any rate, I hope it forgives me for making it exist.
 
The thing is, it exists in a digital universe whose fundamental behaviors operate in power-of-two, which granulates to a single bit particle.
I'm not really talking about a digital universe....

About machine learning simulating stars with various resolutions (similar to the resolution of the eyes and faces in the original post)

It is possible to use biological neural networks rather than digital computers - e.g. see "Bliss" and "eXistenZ".....

bliss.jpg existenz.jpg existenz2.jpg

Anyway the whole point of this thread is about AI having different levels of detail and generating imagery based on training....
 
That's the thing. All these systems you reference exist in a fundamentally digital universe because they exist within a computer environment, and given the time and complexity to sufficiently probe their environment and understand the things on the wall of the cave we shoved them into, they will eventually understand the digital and binary nature of these things, and even of themselves.
 
That's the thing. All these systems you reference exist in a fundamentally digital universe
But what if our universe is based on a neural network AI - or a quantum neural network? I think neural network simulations smear data around instead of keeping it in a specific memory location as binary numbers.... with DALL-E there is a discrete text-based input but the system that has been trained is all interconnected. BTW apparently when images are generated in DALL-E it begins with random noise and then attempts to interpret it as whatever the text input is describing.
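That start-from-noise description matches diffusion-style generators. A very rough toy sketch of the idea (not DALL-E's actual code - the "target" here just stands in for whatever the prompt describes, where a real system uses a learned, text-conditioned denoiser):

Code:
import numpy as np

# Start from pure noise and repeatedly nudge it toward what the "prompt" wants.
rng = np.random.default_rng(0)
target = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # stand-in for the prompt
image = rng.normal(size=(4, 4))                    # pure random noise

for step in range(20):
    image = image + 0.2 * (target - image)         # one refinement step

print(np.round(image, 2))   # ends up close to the 4x4 target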
 
It's not something that can be thought into existence. They don't "smear" data across memory. Bits are "unbreakable" in a digital environment, creating a "Planck distance", a "Planck time" (the systemic clock tick), and other such quirks of minimal granularity.

Even random noise in digital environments is broken, at the end, into digital values.

Nothing exists there that isn't reducible to a binary environment, and nothing can, so long as we lack analog computing hardware.
 
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Yes. A neuron's output is given by
. . . . . Y = f (Σw⋅x + bias)
where f(⋅) is the "transfer" function, or "activation" function.

I found an image on-line with nine different functions depicted. Let me review some of these transfer functions.
(a) the Linear transfer function was used circa 1960 and led to much disappointment when it was discovered that a neural network built from this function could not even calculate exclusive-or!
(b) the binary step function is too brutal. Moreover the "back-propagation" learning algorithm requires that the transfer function be differentiable.
(c) the tanh and sigmoid/logistic functions have the same general shape as each other. (Location relative to x- and y-axes is easily fixed via biases.) Popular neural networks developed circa 1990 used a transfer function shaped like this.

Between the 1990's and the present, this type of neural network became MUCH more powerful. This was mainly due to higher chip densities and speeds, but a new transfer function also offered advantage.
(d) unfortunately the image (below) I found on the 'Net doesn't show the new differentiable transfer function, but it has the same general shape as the SELU or ELU curves shown here: asymptotically horizontal at the low end, but growing without limit at the high end.

I think this new transfer function may draw inspiration from living neurons! Biological neurons usually have a minimum firing rate, and even if they didn't they cannot go below zero firings per second. But the maximum firing rate is very high — several hundreds of firings per second are possible, iirc — much higher than a neuron's typical response rate.

627d12431fbd5e61913b7423_60be4975a399c635d06ea853_hero_image_activation_func_dark.png
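For concreteness, here is a small sketch of that formula with made-up weights and inputs, showing a few of the activation-function shapes discussed above:

Code:
import math

# One artificial neuron: Y = f(sum(w*x) + bias), with made-up weights/inputs.
w = [0.4, -0.7, 1.2]
x = [1.0, 0.5, 0.25]
bias = 0.1

z = sum(wi * xi for wi, xi in zip(w, x)) + bias   # weighted sum plus bias

step    = 1.0 if z > 0 else 0.0                   # (b) binary step
sigmoid = 1.0 / (1.0 + math.exp(-z))              # (c) sigmoid / logistic
elu     = z if z > 0 else math.exp(z) - 1.0       # ELU-shaped: flat below, unbounded above

print(f"z={z:.3f}  step={step}  sigmoid={sigmoid:.3f}  elu={elu:.3f}")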
 
Neurons with temporal traits are also a thing these days, HTMs with refractory periods and local suppression. Behavior in these domains would also have impacts insofar as "second guess", "additional context", "next step" and other such operations on the network. I'll be damned, though, if I've dug in very deeply on the learning algorithms applied to those values.
 
It's not something that can be thought into existence. They don't "smear" data across memory. Bits are "unbreakable" in a digital environment, creating a "Planck distance", a "Planck time" (the systemic clock tick), and other such quirks of minimal granularity.

Even random noise in digital environments is broken, at the end, into digital values.

Nothing exists there that isn't reducible to a binary environment, and nothing can, so long as we lack analog computing hardware.
In a computer a given int is represented in a consistent way throughout the computer. I think in a neural network it isn't consistent and it can get mixed up about its exact value.
 
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
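To show what "floating mantissa power of two values" looks like in practice, here is a small sketch that pulls a 32-bit float apart into its sign, exponent, and mantissa bits:

Code:
import struct

value = 0.1                               # a weight a network might hold
bits = struct.unpack(">I", struct.pack(">f", value))[0]   # float32 bit pattern

sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

print(f"{bits:032b}")
print(f"sign={sign}  exponent={exponent - 127}  mantissa=0x{mantissa:06x}")
# 0.1 has no exact binary representation; the stored value is a nearby
# sum of powers of two, which is where rounding errors come from.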
 
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?
 
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
That suggests a world where the binary number 7 is always consistent - but I thought there are virtual particles appearing and disappearing.... and text to image AI is often based on randomness. Do you think qubits are basically just bits? That suggests quantum computers can be simulated with a classical computer.
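(A single qubit is in fact easy to simulate classically as a two-entry complex state vector - a rough sketch below; the catch is that n qubits need 2**n amplitudes, so the classical simulation blows up quickly.)

Code:
import numpy as np

rng = np.random.default_rng(0)
state = np.array([1.0, 0.0], dtype=complex)   # the |0> state

# Hadamard gate: puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Measurement: outcome probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
outcome = rng.choice([0, 1], p=probs)
print(probs, outcome)   # ~[0.5, 0.5] and a random 0 or 1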
 
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?
So, on those topics of how to increase the system, DALL-E is trained not to do faces, because reasons.

But if you want to have a bigger system, it needs more neurons, but they also need to be arranged into nodes and parallelized correctly to achieve what you want; and unless you have made it capable of genetic evolution of some kind, or spent a lot of time isolating a model for deeper cognition, more neurons is just the tip of the iceberg.

You need the neural complexity to embed the semantic complexity of the request, but the neural complexity has to start out close to the arrangement it needs to embed the right semantic complexity.
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
That suggests a world where the binary number 7 is always consistent - but I thought there are virtual particles appearing and disappearing.... and text to image AI is often based on randomness. Do you think qubits are basically just bits?
Unless the result is being left solely on quantum hardware, or is random seeming, or hard to comprehend, once it hits the digital bottleneck of the neural model, it's all bits.

They're going to shake out in the rounding errors at the precision limit of the floating point, and be visible as clips of rationals into a limited binary precision.

Their universe will only be so "rational", at which point all the real values are power of two.

You would need to invent a rational or even a "process number" type to make it unable to see the "bits" in the behavior.
 
So, on those topics of how to increase the system, DALL-E is trained not to do faces, because reasons.
It can do Pixar style faces:
IMG_20220613_164322.jpg

It can even do photorealistic faces:

But it will not be able to generate photorealistic faces for the public.

OpenAI's DALL-E was trained independently from DALL-E mini. Mini can generate photorealistic faces - even of famous people - though they are distorted if it can't handle the detail.
 
I've only ever seen horror shows from Mini, when approaching "realistic"

It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
 