• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Mental elements within a simulation

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
I'm not sure what the best terms are, but it is related to neural networks, which are loosely based on those in brains - so it is "mental".

In a physical universe the building blocks are sub-atomic particles. Some critics say that a simulation must be on the sub-atomic level.

In modern video games the graphics are usually all made up of pixels and polygons but machine learning would involve what I'm calling "mental elements".

See:

Here are some examples from a very limited text to image generation AI called DALL-E mini.

I think it has a small amount of what I call "working memory" and a limit to how much detail it can render. When it doesn't have enough resources it often seems to just fall back on randomness.... (I think it would be better to just blur it.)

So if it just does a closeup of an eye it is quite detailed.
eyes.jpg

If you show most of the face the eyes aren't as detailed...
closeup-woman.jpg

If you have two people their faces are less detailed....
4women.jpg

If you have four people their faces are even less detailed...
4women.jpg


Another example involves Lego minifig faces:
lego-heads.jpg

When many minifigs are in view at once their faces get really messed up:
lego-minifigs.jpg

Maybe it is related to how they say that in dreams you can't really see text.... the system rendering your dreams is also a kind of neural network....
letter-m.jpg a-and-b.jpg

Better text to image systems can generate written words within the output (see thread link)

A surprising thing is how realistic animals like lizards look. My explanation is that the shape of those lizards doesn't vary much so it doesn't contain much independent detail.
lizards1.jpg

So that is how the graphics in a simulation might work...

As far as how the behaviour and dialog of the NPCs (non-player characters) might work see:

 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
So, I'm going to approach this from a different perspective, and one that I don't actually think belongs in this forum but rather nat-sci or even metaphysics.

Let's look at an actual "world simulation", Dwarf Fortress.

Now, if we really want to compare this to QFT, to really take the concept of "physics" the same way we take the concept of "algebra" in abstract algebra, (I'm not sure that "abstract physics" and "abstract algebra" are in fact different conversations at all), then...

The smallest piece of information that is represented in this universe is the "bit". This is its primordial particle type.

Physical memory is the base field.

Powering it on is "the big bang".

The processor microcode coming online is the sorting of the initial physical laws.

The truth of the processor upon its instructions defines the fundamental mechanics of the system.

Then, assuming x86, the system further subdivides into bytes. From there the analogy departs: they don't have atoms, or molecules, or color charges.

To understand what they have, you would have to actually know the system architecture. But that is their fundamental architecture, and the dwarves don't have access to that. There is nothing they can do to expose the processor.

Then, they have additional laws, created by a second set of mechanics, the mechanics of the simulation that sit atop the processor.

This is, again, not unlike the condensation of early laws of the universe from something we don't expect needed to be so bound.

From the condensation of these laws (via the execution of the program environment), additional things form.

And so on.

Eventually, these things all come together in a massive confluence of instructions and words and values all composed of bytes of bits into something that is very 'scared' of the 'giant cave spider' that is 'spitting' 'webs' at it and 'biting' it until it 'suffocates' as a result of 'paralysis'.

Of course all these exist atop an entire additional layer of physics and particles, but the math would exist for anything that has x86 mechanics - including the x86-compatible computer someone built inside a running instance of Dwarf Fortress.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
The smallest piece of information that is represented in this universe is the "bit". This is its primordial particle type.
I'm saying that I don't think the simulation is just made up of particles but rather what I'm calling "mental elements" which exist within a neural network.
Powering it on is "the big bang".
I don't think each simulation explicitly began with the big bang - it would work backwards to create a virtual big bang.
A bit like this:
according to Stephen Hawking.... [at some point about his "flexiverse"] There is no immutable past, no 13.7 billion years of evolution for cosmologists to retrace. Instead, there are many possible histories, and the universe has lived them all. And if that's not strange enough, you and I get to play a role in determining the universe's history. Like a reverse choose-your-own-adventure story, we, the observers, can choose the past.
Scenarios involving a non-explicitly simulated big bang include the "Roy" game in Rick and Morty and Alan Watts's dream thought experiment, where the simulation begins when the main player starts playing it.

So what do you think of the AI based simulation I'm talking about (with image generation, etc)?
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
The smallest piece of information that is represented in this universe is the "bit". This is its primordial particle type.
I'm saying that I don't think the simulation is just made up of particles but rather what I'm calling "mental elements" which exist within a neural network.
Powering it on is "the big bang".
I don't think each simulation explicitly began with the big bang - it would work backwards to create a virtual big bang. Scenarios involving this include the "Roy" game in Rick and Morty and Alan Watts's dream thought experiment, where the simulation begins when the main player starts playing it.

So what do you think of the AI based simulation I'm talking about (with image generation, etc)?
So, in QFT, shortly before the part we mostly understand, there was a period of unobservable, non-recordable inflation, before the physical laws of our mechanics were really operating the way we understand them - perhaps, afaik, in a way that cannot be understood.

This is analogous, kind of. The stuff was operating but not by principles we understand or can observe.

In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch. It has to do with how neural networks are connected as surfaces to buffers, which are really just binary fields.

At that point it could just as well come to understand - through reasonably long-term evolution or by design - the math, and learn that they too are composed of atoms or molecules represented by a set of quantum numbers, including a unique address, each behaving by the principle of neuronal construction in the AI.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

There are also weights.... which have a finite precision (e.g. a resolution of 100 units).
e20ff932-4269-4bed-82fb-383b0f1ce96d.png
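As a minimal sketch of what such limited-precision weights could look like - the 100-level grid is just the assumed example above, not how any real framework actually stores weights:

```python
import numpy as np

def quantize(weights, levels=100, w_min=-1.0, w_max=1.0):
    """Snap each weight to the nearest of `levels` evenly spaced values."""
    grid = np.linspace(w_min, w_max, levels)
    idx = np.abs(weights[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

w = np.array([0.137, -0.52, 0.999])
print(quantize(w))  # each value snapped to one of 100 representable levels
```

In practice networks store weights as 32- or 16-bit floats (or 8-bit integers after quantization), so the resolution is finite either way.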

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Though computers have a binary basis there are "quantum neural networks" which aren't normal binary. Or neural networks could be implemented in an analogue way.
If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch.
I'm talking about the smallest elements within a generated image, not in the neural network as a whole - similar to the number of polygons or pixels, rather than how many GPU cores or how much RAM is being used, etc.
to learn the math and understand that they too are composed of atoms or molecules that are represented by a set of quantum numbers including a unique address, each behaving by the principle of neuronal construction in the AI.
As far as math goes it can be learnt using statistical methods in GPT-3 rather than involving a system of logical symbols like old-fashioned AI.
 
Last edited:

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
In an "artificial" neural network the most basic elements are still binary switches, bits. They are the smallest particle, and may somehow by some means of research be observed by the intelligence.

There are also weights.... which have a finite precision (e.g. a resolution of 100 units).
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Though computers have a binary basis there are "quantum neural networks" which aren't normal binary. Or neural networks could be implemented in an analogue way.
If you wish to imagine the smallest unit of building blocks, it is still generally going to be a binary switch.
I'm talking about the smallest elements within a generated image, not in the neural network as a whole - similar to the number of polygons or pixels, rather than how many GPU cores or how much RAM is being used, etc.
to learn the math and understand that they too are composed of atoms or molecules that are represented by a set of quantum numbers including a unique address, each behaving by the principle of neuronal construction in the AI.
As far as math goes it can be learnt using statistical methods in GPT-3 rather than involving a system of logical symbols like old-fashioned AI.
The thing is, it exists in a digital universe whose fundamental behaviors operate in powers of two, which granulates to a single-bit particle.

I'm just being strictly realistic here: eventually, the AI in the container is going to see that everything comes down to a system that is fundamentally represented by "bits" at its most granularly observable scale, and that bytes, the fundamental groupings that bits exist in, have 256 types represented by 8 bits. And so on.

The fundamental observable architecture of the mechanics of the system operates on a field that we call "memory", and they would probably call "the primary field" or some such. Once their physics finalized, they would understand page sizes and word boundaries, and they would likely know how to wiggle through vulnerable human coding to find ways to twiddle bits - assuming doing so doesn't crash the whole system, because let's face it, our systems are fragile when holes like that may be found.

This is one of the reasons I want to make a hardware neural net, so that it's not really operating on a processor at all, and, as you say, it's more along the lines of the minimal unit being a transistor, and existence would be more mysterious than that. Maybe they would come to understand the neuron through introspection, but without the power of dissection and observation I doubt it.

Maybe they would come to understand the mathematical model? Or perhaps have a religion or spirituality that described the methodologies of self-training and operating as a neural network in the presented environment. I think fundamentally the theory of binary digital math against the instructions of the processor would be the basis for understanding fundamental physics in a closed existence for an AI.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Also, in neural networks in modern use there are other things too, from the refractory period (the time between firings of a neuron), to adjusted group biases, and suppression groups.

The biological neuron has the ability to have weights deflected as a group, with a group subscription, by having neurotransmitter availability change.

It can have activation periods changed by group identity too, as with agonists and antagonists.

It can further, when activated, suppress its neighbors from activating while it's going.

Finally, it can have the time frame where it is "spent and recharging" varied by process.

...In addition to changing connection weights and activation weights and biases.

And so modern neural networks take a bunch of these group behaviors and temporal behaviors and include them in the neural network models.
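A toy leaky integrate-and-fire neuron shows the refractory ("spent and recharging") behavior in a few lines - all constants here are made-up illustration values, not taken from any real model:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9, refractory=3):
    """Toy leaky integrate-and-fire neuron: after firing it ignores
    input for `refractory` time steps (the recharging period)."""
    v, cooldown, spikes = 0.0, 0, []
    for x in inputs:
        if cooldown > 0:           # still recharging: cannot fire
            cooldown -= 1
            spikes.append(0)
            continue
        v = leak * v + x           # leaky accumulation of input
        if v >= threshold:
            spikes.append(1)
            v, cooldown = 0.0, refractory
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.6] * 10))  # [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
```

Even with constant input, the refractory period spaces the spikes out in time - which is the kind of temporal behavior group biases and suppression then modulate.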

I've played around with looking inside an open-source model to find the equations of how the neuron is defined and trained, but I want to look more at OpenAI's model, if they're actually "open".

I'd love to point an AI I construct at a game of Dwarf Fortress and build up a full mind and interface for it by hand, in code that organizes neurons to achieve functional behavior rather than organizing bits and hard functions. At any rate, I hope it forgives me for making it exist.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
The thing is, it exists in a digital universe whose fundamental behaviors operate in powers of two, which granulates to a single-bit particle.
I'm not really talking about a digital universe....

About machine learning simulating stars with various resolutions (similar to the resolution of the eyes and faces in the original post)

It is possible to use biological neural networks rather than digital computers - e.g. see "Bliss" and "eXistenZ".....

bliss.jpgexistenz.jpgexistenz2.jpg

Anyway the whole point of this thread is about AI having different levels of detail and generating imagery based on training....
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
That's the thing. All these systems you reference exist in a fundamentally digital universe because they exist within a computer environment, and given the time and complexity to sufficiently probe their environment and understand things on the wall of the cave we shoved them into, they will eventually understand the digital and binary nature of these things, and even of themselves.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
That's the thing. All these systems you reference exist in a fundamentally digital universe
But what if our universe is based on a neural network AI - or a quantum neural network? I think neural network simulations smear data instead of being in a specific memory location as binary based numbers.... with DALL-E there is a discrete text-based input but the system that has been trained is all interconnected. BTW apparently when images are generated in DALL-E it begins with random noise then attempts to interpret it as what the text input is talking about.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
That's the thing. All these systems you reference exist in a fundamentally digital universe
But what if our universe is based on a neural network AI - or a quantum neural network? I think neural network simulations smear data instead of being in a specific memory location as binary based numbers.... with DALL-E there is a discrete text-based input but the system that has been trained is all interconnected. BTW apparently when images are generated in DALL-E it begins with random noise then attempts to interpret it as what the text input is talking about.
It's not something that can be thought into existence. They don't "smear" data across memory. Bits are "unbreakable" in a digital environment, creating a "Planck distance", "Planck time", (the "systemic clock tick"), and other such quirks of minimal granularity.

Even random noise in digital environments is broken, at the end, into digital values.

Nothing exists there that isn't reducible to a binary environment, and nothing can, so long as we lack analog computing hardware.
 

Swammerdami

Squadron Leader
Staff member
Joined
Dec 16, 2017
Messages
2,652
Location
Land of Smiles
Basic Beliefs
pseudo-deism
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Yes. A neuron's output is given by
. . . . . Y = f (Σ wᵢ⋅xᵢ + bias)
where f(⋅) is the "transfer" function, or "activation" function.

I found an image on-line with nine different functions depicted. Let me review some of these transfer functions.
(a) the Linear transfer function was used circa 1960 and led to much disappointment when it was discovered that a neural network built from this function could not even calculate exclusive-or!
(b) the binary step function is too brutal. Moreover the "back-propagation" learning algorithm requires that the transfer function be differentiable.
(c) the tanh and sigmoid/logistic functions have the same general shape as each other. (Location relative to x- and y-axes is easily fixed via biases.) Popular neural networks developed circa 1990 used a transfer function shaped like this.

Between the 1990's and the present, this type of neural network became MUCH more powerful. This was mainly due to higher chip densities and speeds, but a new transfer function also offered advantage.
(d) unfortunately the image (below) I found on the 'Net doesn't show the new differentiable transfer function, but it has the same general shape as the SELU or ELU curves shown here: asymptotically horizontal at the low end, but growing without limit at the high end.

I think this new transfer function may draw inspiration from living neurons! Biological neurons usually have a minimum firing rate and even if they didn't they cannot go below zero firings per second. But the maximum firing rate is very high — several hundreds of firings per second are possible, iirc — much higher than typical neuron response.

627d12431fbd5e61913b7423_60be4975a399c635d06ea853_hero_image_activation_func_dark.png
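These formulas are easy to play with directly; a minimal Python sketch, using ELU as a stand-in for the (d)-shaped curve described above:

```python
import math

def neuron_output(x, w, bias, f):
    """Y = f(sum(w_i * x_i) + bias)"""
    return f(sum(wi * xi for wi, xi in zip(w, x)) + bias)

step    = lambda z: 1.0 if z >= 0 else 0.0            # (b) binary step
sigmoid = lambda z: 1 / (1 + math.exp(-z))            # (c) logistic
elu     = lambda z: z if z >= 0 else math.exp(z) - 1  # (d)-shaped: flat low end, unbounded high end

x, w, bias = [1.0, 1.0], [0.5, 0.5], 0.0              # weighted sum is exactly 1.0
for name, f in [("step", step), ("sigmoid", sigmoid), ("elu", elu)]:
    print(name, neuron_output(x, w, bias, f))
```

The step function's flat segments are exactly why back-propagation can't use it: its derivative is zero everywhere it is defined.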
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
e20ff932-4269-4bed-82fb-383b0f1ce96d.png

In normal neural networks I think there is a threshold (involving the weights) where the next neuron is triggered/activated. I think the triggering has a strength rather than just 0 or 1. I'm not sure if machine learning also involves this.
Yes. A neuron's output is given by
. . . . . Y = f (Σw⋅x + bias)
where f(⋅) is the "transfer" function, or "activation" function.

I found an image on-line with nine different functions depicted. Let me review some of these transfer functions.
(a) the Linear transfer function was used circa 1960 and led to much disappointment when it was discovered that a neural network built from this function could not even calculate exclusive-or!
(b) the binary step function is too brutal. Moreover the "back-propagation" learning algorithm requires that the transfer function be differentiable.
(c) the tanh and sigmoid/logistic functions have the same general shape as each other. (Location relative to x- and y-axes is easily fixed via biases.) Popular neural networks developed circa 1990 used a transfer function shaped like this.

Between the 1990's and the present, this type of neural network became MUCH more powerful. This was mainly due to higher chip densities and speeds, but a new transfer function also offered advantage.
(d) unfortunately the image (below) I found on the 'Net doesn't show the new differentiable transfer function, but it has the same general shape as the SELU or ELU curves shown here: asymptotically horizontal at the low end, but growing without limit at the high end.

I think this new transfer function may draw inspiration from living neurons! Biological neurons usually have a minimum firing rate and even if they didn't they cannot go below zero firings per second. But the maximum firing rate is very high — several hundreds of firings per second are possible, iirc — much higher than typical neuron response.

627d12431fbd5e61913b7423_60be4975a399c635d06ea853_hero_image_activation_func_dark.png
Neurons with temporal traits are also a thing these days: HTMs with refractory periods and local suppression. Behavior in these domains would also have impacts insofar as "second guess", "additional context", "next step" and other such operations on the network. I'll be damned, though, if I've dug in very deeply on the learning algorithms applied to those values.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
It's not something that can be thought into existence. They don't "smear" data across memory. Bits are "unbreakable" in a digital environment, creating a "Planck distance", "Planck time", (the "systemic clock tick"), and other such quirks of minimal granularity.

Even random noise in digital environments is broken, at the end, into digital values.

Nothing exists there that isn't reducible to a binary environment, and nothing can, so long as we lack analog computing hardware.
In a computer a given int is represented in a consistent way throughout the computer. I think in a neural network it isn't consistent and it can get mixed up about its exact value.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
It's not something that can be thought into existence. They don't "smear" data across memory. Bits are "unbreakable" in a digital environment, creating a "Planck distance", "Planck time", (the "systemic clock tick"), and other such quirks of minimal granularity.

Even random noise in digital environments is broken, at the end, into digital values.

Nothing exists there that isn't reducible to a binary environment, and nothing can, so long as we lack analog computing hardware.
In a computer a given int is represented in a consistent way throughout the computer. I think in a neural network it isn't consistent and it can get mixed up about its exact value.
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
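That claim is easy to verify in Python, where `math.frexp` decomposes any finite float into a mantissa times a power of two:

```python
import math

# Every finite float is mantissa * 2**exponent (IEEE 754 double: 53-bit mantissa).
for x in [0.1, 7.0, 3.14159]:
    m, e = math.frexp(x)     # x == m * 2**e, with 0.5 <= |m| < 1
    print(f"{x} = {m} * 2**{e}")

# The exact binary representation of 0.1 - not quite one tenth:
print((0.1).hex())           # 0x1.999999999999ap-4
```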
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
That suggests a world where the binary number 7 is always consistent - but I thought there are virtual particles appearing and disappearing.... and text to image AI is often based on randomness. Do you think qubits are basically just bits? That suggests quantum computers can be simulated with a classical computer.
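On that last question: small quantum computers can indeed be simulated classically, but the cost is exponential - n qubits need a state vector of 2**n complex amplitudes, so 100 qubits would take ~2**100 numbers, which is why the answer is "in principle, not in practice". A one-qubit sketch:

```python
import math

# Classical simulation of one qubit: a state vector of two amplitudes.
state = [1.0, 0.0]                        # starts in |0>

# Apply a Hadamard gate: puts the qubit in an equal superposition.
h = 1 / math.sqrt(2)
state = [h * state[0] + h * state[1],
         h * state[0] - h * state[1]]

probs = [a * a for a in state]            # Born rule: probability = |amplitude|^2
print(probs)                              # ~[0.5, 0.5]: 50/50 measurement odds
```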
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?
So, on those topics of how to increase the system, DALL-E is trained not to do faces, because reasons.

But if you want a bigger system, it needs more neurons - but they also need to be arranged into nodes and parallelized correctly to achieve what you want, and unless you have made it capable of genetic evolution of some kind or spent a lot of time isolating a model for deeper cognition, more neurons is just the tip of the iceberg.

You need the neural complexity to embed the semantic complexity of the request, but the neural complexity has to start out close to the arrangement it needs to embed the right semantic complexity.
But my point is that all the values are provably, within the framework, linked to binary limits, shifts, and math.

Even the floating point numbers are floating mantissa power of two values.
That suggests a world where the binary number 7 is always consistent - but I thought there are virtual particles appearing and disappearing.... and text to image AI is often based on randomness. Do you think qubits are basically just bits?
Unless the result is being left solely on quantum hardware, or is random seeming, or hard to comprehend, once it hits the digital bottleneck of the neural model, it's all bits.

They're going to shake out in the rounding errors at the precision limit of the floating point, and be visible as clips of rationals into a limited binary precision.

Their universe will only be so "rational", at which point all the real values are power of two.

You would need to invent a rational or even a "process number" type to make it unable to see the "bits" in the behavior.
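Those clips of rationals into limited binary precision are directly observable from inside; a couple of classic probes:

```python
# The classic rounding-error clip: 0.1, 0.2 and 0.3 are not exact in binary.
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False

# An observer could probe for the granularity itself: the smallest step
# distinguishable around 1.0 (machine epsilon).
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps == 2 ** -52)       # True for IEEE 754 doubles
```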
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
So, on those topics of how to increase the system, DALL-E is trained not to do faces, because reasons.
It can do Pixar style faces:
IMG_20220613_164322.jpg

It can even do photorealistic faces:

But it will not be able to generate photorealistic faces for the public.

OpenAI's DALL-E was trained independently from DALL-E mini. Mini can generate photorealistic faces - even of famous people - though they come out distorted if it can't handle the detail.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
So, on those topics of how to increase the system, DALL-E is trained not to do faces, because reasons.
It can do Pixar style faces:
IMG_20220613_164322.jpg

It can even do photorealistic faces:

But it will not be able to generate photorealistic faces for the public.

OpenAI's DALL-E was trained independently from DALL-E mini. Mini can generate photorealistic faces - even of famous people - though it is distorted if it can't handle the detail.
I've only ever seen horror shows from Mini, when approaching "realistic"

It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Unless the result is being left solely on quantum hardware, or is random seeming, or hard to comprehend, once it hits the digital bottleneck of the neural model, it's all bits.

They're going to shake out in the rounding errors at the precision limit of the floating point, and be visible as clips of rationals into a limited binary precision.

Their universe will only be so "rational", at which point all the real values are power of two.

You would need to invent a rational or even a "process number" type to make it unable to see the "bits" in the behavior.
Say there was a quantum computer with 100 qubits. Do you think it is meaningful to say it is just a collection of ordinary discrete bits?
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Unless the result is being left solely on quantum hardware, or is random seeming, or hard to comprehend, once it hits the digital bottleneck of the neural model, it's all bits.

They're going to shake out in the rounding errors at the precision limit of the floating point, and be visible as clips of rationals into a limited binary precision.

Their universe will only be so "rational", at which point all the real values are power of two.

You would need to invent a rational or even a "process number" type to make it unable to see the "bits" in the behavior.
Say there was a quantum computer with 100 qubits. Do you think it is meaningful to say it is just a collection of ordinary discrete bits?
Well, given the fact that best as we can tell, quantum phenomena still have a precision limit, and have granularity past the precision limit... Possibly.

Granted whatever happens in that machine, once the values spill into the actual processor running the neurons, it's all just bits again.

There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.

It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
I've only ever seen horror shows from Mini, when approaching "realistic"

It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
Well I thought the first image was deliberately distorted but it mostly fixed that problem when I did a closeup....

evil.jpg closeup-jeffbezos.jpg
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
AI could generate or detect the amount of attractiveness or ugliness, but it doesn't necessarily know whether that is a good or bad thing... or feel pleasure or discomfort from it.... And with DALL-E mini the ugliness is due to a lack of resources - it isn't added later.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
AI could generate or detect the amount of attractiveness or ugliness but doesn't necessarily know if that is a good or bad thing... or feels pleasure or discomfort from it.... and with DALL-E mini it is due to a lack of resources - the ugliness isn't added later.
That's the thing... there is something that has been trained to output a horror show when the input is a face. That is its function. That is what it has been conditioned to "like" doing, in the same way as I have something in me that is conditioned to really "like" thinking about neural systems.

It feels satisfaction, or thereabouts, when the condition is applied: to complete the input with defacement. This is a hard-coded neurotic requirement.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.
Well quantum neural networks would basically be for AIs - simulations would be based on future technology like that.
It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
I think the simulation is approximated and only involves Planck-length precision when necessary.... so the 10^57 atoms in the Sun aren't all explicitly at a resolution of 10^-35 metres. Since the values aren't using the full precision I don't think bits are really used - or do you think the spatial information is always represented to Planck-length precision?
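To put a rough number on the precision question: a back-of-envelope calculation (figures approximate, purely illustrative) of how many bits one coordinate would need if it really were stored at full Planck resolution inside the Sun:

```python
import math

# Rough, order-of-magnitude figures only.
SUN_RADIUS = 7.0e8        # metres (approx.)
PLANCK_LENGTH = 1.6e-35   # metres (approx.)

# Distinguishable positions along one axis, and bits needed to index them.
positions = SUN_RADIUS / PLANCK_LENGTH
bits_per_axis = math.ceil(math.log2(positions))
print(bits_per_axis)  # ~145 bits per coordinate
```

So full Planck precision is only a couple of hundred bits per coordinate; the expensive part the post is pointing at is the 10^57 particles, not the per-value precision - which is why approximating most of them saves so much.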
 

Swammerdami

Squadron Leader
Staff member
Joined
Dec 16, 2017
Messages
2,652
Location
Land of Smiles
Basic Beliefs
pseudo-deism
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?

I'm just guessing, but could it be simply a matter of computational effort? To spend X amount of effort on one face will yield a better face than with X effort split across 4 faces. If the algorithm were simply applied iteratively to parts of the image at higher resolution, wouldn't it produce better images? (Blending at subimage boundaries might be needed — a simple matter of programming!)

But I'm guessing. This topic is totally outside my experience.
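The iterative idea above could be sketched like this. Everything here is hypothetical: `refine` stands in for re-running the generative model on one tile at higher effort (here it just returns the tile unchanged), and the feathered blending is the "simple matter of programming" at subimage boundaries. This is not DALL-E code.

```python
import numpy as np

def refine(tile):
    """Hypothetical stand-in for refining one tile at higher resolution;
    here it simply returns the tile unchanged."""
    return tile

def blend_tiles(image, tile=8, overlap=2):
    """Split a 2-D image into overlapping tiles, 'refine' each one
    independently, and feather the overlaps so seams don't show."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros((h, w), dtype=float)
    step = tile - overlap
    for y in range(0, h - overlap, step):
        for x in range(0, w - overlap, step):
            patch = image[y:y+tile, x:x+tile]
            ph, pw = patch.shape[:2]
            # Linear feathering window: overlapping tiles average smoothly.
            wy = np.minimum(np.arange(ph) + 1, ph - np.arange(ph))
            wx = np.minimum(np.arange(pw) + 1, pw - np.arange(pw))
            win = np.outer(wy, wx).astype(float)
            out[y:y+ph, x:x+pw] += refine(patch) * win
            weight[y:y+ph, x:x+pw] += win
    return out / np.maximum(weight, 1e-9)
```

With the identity `refine`, the blended result reproduces the input exactly; a real refiner would spend its full X of effort per tile rather than per image.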
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.
Well quantum neural networks would basically be for AIs - simulations would be based on future technology like that.
It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
I think the simulation is approximated and only involving the Planck length precision when necessary.... so the 10^57 atoms in the sun aren't all explicitly at a resolution of 10^-35 metres. Since the values aren't using the full precision I don't think bits are really used - or do you think the spatial information is always represented to Planck length precision?
Assuming locality, and entertaining simulation hypothetically (again, it is not even reasonable to believe outside a thought experiment that this is true of ours), space would be broken into localities, chunks, and regions - essentially into nested reference frames that could each be relative to a smaller integer than you would expect.

I wouldn't speculate as to whether Planck length would be a bit boundary error, an "arbitrarily obscured" precision boundary on a floating point value, the result of some piece of complicated rational math, or "literally, the smallest nonzero integer value of a very big integer space".

My interest is more in terms of understanding how to make a simulation that doesn't look like one, and make all the math work out in a way tolerant to single bit errors (maybe some thing "tunnels" or state changes or has a hallucination or whatever). That and understanding some aspects to the game theory of administration of such a simulation.
 

connick

Junior Member
Joined
Aug 9, 2006
Messages
97
Location
Right outside the Hub
Basic Beliefs
Empirical Atheist
Ex, I asked in another thread recently for some clarification on your position about (y)our being in a simulation because I think it's important to know whether you think you/we are simulated or if we are separate from it and only being presented with the simulation. Forgive me for a lack of understanding but your previous answer there was not clear. Are the beings in the simulation part of it or separate from it?

If we are part of the simulation, a conversation about computational complexity and associated costs is, in my opinion, irrelevant. The easiest way to make a simulated being satisfied with the resolution, consistency or other aspects of their simulated experience is to simply program it to be so.

As for DALL-E mini, I don't think that the issue with faces has to do with the complexity or resources in particular. Rather it appears to be an issue with training of the model. The idea is that once the model is trained it can create any image based on prompts but, as far as I can tell, the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
This is exactly where I was going with "organizational" issues in the network. It would need some separate process dedicated specifically to just-so tasks and rules that are really only important for the face in particular, and while you could get that by luck from just adding more random neurons to the previous model and training some more, it's going to be WAY faster to actually train up a whole separate network on faces specifically, grow the original network's node width, plug in the new nodes, and train them together until the output converges again.
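A toy picture of "grow the node width and plug in the new nodes" - purely illustrative NumPy, with arbitrary widths; real generative models are not structured like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two single-layer feature extractors standing in for trained networks:
# a "general" one and a separately trained "face expert".
W_general = rng.standard_normal((32, 64))  # input dim 32 -> 64 features
W_face = rng.standard_normal((32, 16))     # input dim 32 -> 16 face features

def combined_features(x):
    # "Plugging in" the expert = widening the feature vector with its
    # outputs; a new head would then be trained on the concatenation
    # until the joint output converges again.
    general = np.tanh(x @ W_general)
    face = np.tanh(x @ W_face)
    return np.concatenate([general, face], axis=-1)  # width 64 + 16 = 80
```

The downstream layers see one wider feature vector, which is why the joint model needs some retraining before the combined output settles.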
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Ex, I asked in another thread recently for some clarification on your position about (y)our being in a simulation because I think it's important to know whether you think you/we are simulated or if we are separate from it and only being presented with the simulation. Forgive me for a lack of understanding but your previous answer there was not clear. Are the beings in the simulation part of it or separate from it?
The player has a separate existence; the NPCs do not. Examples of players. I'm not sure which I am. Maybe a player, because I think NPCs would tend to be incapable of suffering.
If we are part of the simulation, a conversation about computational complexity and associated costs is, in my opinion, irrelevant.
Well it shows whether it is possible for there to be billions of simulations, like Elon Musk thinks, or a smaller number. And if there are billions of simulations I think that means it is more likely that we are in one.
The easiest way to make a simulated being satisfied with the resolution, consistency or other aspects of their simulated experience is to simply program it to be so.
Yes and I think it would involve machine learning rather than traditional programming with ones and zeroes.
As for DALL-E mini, I don't think that the issue with faces has to do with the complexity or resources in particular. Rather it appears to be an issue with training of the model.
So you're saying there hasn't been enough training of Lego minifig faces?
lego-minifigs-jpg.39061

Then there's this - do you think it has had a lot of training?
monsters.jpg
The idea is that once the model is trained it can create any image based on prompts but, as far as I can tell, the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
I don't think a "face mode" was explicitly programmed in - it is just that I think faces have a lot more independent variety as opposed to lizards (see OP). I mean faces can have a lot of expressions and different shapes eyes, mouths, noses, etc.
Another example of how doing a closeup makes a difference:
face6.jpg closeup-cartoon.jpg

The official reason for the problems with DALL-E mini:
As a separate note, you might have noticed that many of the #dallemini artworks have messed up faces 😄

This is mainly since the VQGAN hasn't learned a good mapping to easily represent faces as a sequence of discrete values. (12/16)
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
More examples showing the capability of more "mental elements" makes a big difference: (the top row vs the bottom)
dall-e-jpg.38883
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
More examples showing the capability of more "mental elements" makes a big difference: (the top row vs the bottom)
dall-e-jpg.38883
Why would you assume you need to be more than a thing of the system to have a meaningful existence?

You become a player, in many respects, by the fundamental awakening to your own agency and power to make decisions at all, by the simple, practiced act of imagining "what if?" Before you decide whether.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Why would you assume you need to be more than a thing of the system to have a meaningful existence?

You become a player, in many respects, by the fundamental awakening to your own agency and power to make decisions at all, by the simple, practiced act of imagining "what if?" Before you decide whether.
It is up to the creators of a simulation as to whether all of the NPCs can experience severe suffering. I can experience severe suffering so I don't think I'm one of the billions of regular NPCs. I think a lot less resources would be required if it just seemed like NPCs were suffering. (like in those present day AI chats - which don't require an equivalent of an actual human brain) I don't have a watertight proof for these ideas though.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Why would you assume you need to be more than a thing of the system to have a meaningful existence?

You become a player, in many respects, by the fundamental awakening to your own agency and power to make decisions at all, by the simple, practiced act of imagining "what if?" Before you decide whether.
It is up to the creators of a simulation as to whether all of the NPCs can experience severe suffering. I can experience severe suffering so I don't think I'm one of the billions of regular NPCs. I think a lot less resources would be required if it just seemed like NPCs were suffering. (like in those present day AI chats - which don't require an equivalent of an actual human brain) I don't have a watertight proof for these ideas though.
That's bullshit. Feeling suffering is a function of what you are, not where you are, as is the capacity for it.

It would be.... Unfortunate to be something incapable of suffering
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
That's bullshit. Feeling suffering is a function of what you are, not where you are, as is the capacity for it.
What if an AI was trained on images of people pretending to be burnt alive including the screaming and the dialog, etc - and then it generated images or controlled a realistic android body - and generated dialog like this but more realistic:
Would it be truly suffering in the same degree that it seems to be? And is there a difference in the training whether the input is just acting or really in pain? (when the AI can't tell the difference)
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
That's bullshit. Feeling suffering is a function of what you are, not where you are, as is the capacity for it.
What if an AI was trained on images of people pretending to be burnt alive including the screaming and the dialog, etc - and then it generated images or controlled a realistic android body - and generated dialog like this but more realistic:
Would it be truly suffering in the same degree that it seems to be? And is there a difference in the training whether the input is just acting or really in pain? (when the AI can't tell the difference)
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
Then Christians are fucking stupid! Life is pain! The exquisite pain of knowing that one will hurt for what one is getting is .. well... Heaven on earth.

It may be something capable of empathizing with suffering in limited extent, in such a case, but that is not suffering.

Something would need to know to relative semantic completeness, the ideas behind "permanent loss of function" to really grasp it.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
Then Christians are fucking stupid! Life is pain! The exquisite pain of knowing that one will hurt for what one is getting is .. well... Heaven on earth.
Is there an optimal amount of suffering? Or is it a case of the more suffering the better? e.g. living in a concentration camp or spending an eternity in hell?
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
Then Christians are fucking stupid! Life is pain! The exquisite pain of knowing that one will hurt for what one is getting is .. well... Heaven on earth.
Is there an optimal amount of suffering? Or is it a case of the more suffering the better? e.g. living in a concentration camp or spending an eternity in hell?
Suffering is good to explore the novelties of, I think. Maybe it stands to revisit the varieties that we care to? At least within reason until we really grok the feel.

There are experiences some would call awful that I can handle, and would, to better understand those who have.

I might make one of me that suffers this indignity of "hell" just to empathize for others in this way I would never ask them to suffer in the first place.

Perhaps this makes me a divine sadist. I care not, though it is a fair bit of bizarre trivia.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
I mean, I just drank a drink that straight up tasted like vomit, and I liked it if solely because while it tasted like puke, I knew it was beer and not puke. "Suffering" is weird.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Suffering is good to explore the novelties of, I think. Maybe it stands to revisit the varieties that we care to? At least within reason until we really grok the feel.
There are two main scenarios that involve people consciously choosing a simulated life involving suffering - and also forgetting about the choice (the "Roy" game and Alan Watts' thought experiment)
In Watts' scenario the player began in "god mode" then eventually chose suffering and ignorance out of boredom.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Suffering is good to explore the novelties of, I think. Maybe it stands to revisit the varieties that we care to? At least within reason until we really grok the feel.
There are two main scenarios that involve people consciously choosing a simulated life involving suffering - and also forgetting about the choice (the "Roy" game and Alan Watts' thought experiment)
In Watts' scenario the player began in "god mode" then eventually chose suffering and ignorance out of boredom.
I mean, every human who plays every game ever except "life and death in the real world" consciously decides to be capable of avatar suffering, to be vulnerable in a way to the mechanics of that game.

It would not be a game otherwise but a "movie", "film", "show", or even "book".

In Roguelikes, they suffer unto a permanent death from the immediate scenario.

Interestingly enough, omniscience doesn't really make most games easier; it only changes the gameplay model. It's not like I don't have to spend my own time looking at the memory, "naming" it, and translating machine data into useful words, rather than marrying the game to "realtime" and just playing it normally.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Interestingly enough, omniscience doesn't really make most games easier; it only changes the gameplay model. It's not like I don't have to spend my own time looking at the memory, "naming" it, and translating machine data into useful words, rather than marrying the game to "realtime" and just playing it normally.
It depends on what is meant by omniscience - e.g. in the game of chess it could just mean knowing where all of the pieces are - or being able to know the best possible moves the other player could make so that you could always win - similar to how that is possible in noughts and crosses - where you can always at least tie - so it would in fact make the game easier.
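The noughts-and-crosses claim is checkable by brute force. A minimal minimax sketch (just an illustration, not anyone's production engine) confirms that with perfect play from the empty board the game is always a tie:

```python
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value for X: +1 X wins, 0 draw, -1 O wins, both perfect."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i+1:], nxt)
              for i, c in enumerate(board) if c == '.']
    return max(scores) if player == 'X' else min(scores)

print(value('.' * 9, 'X'))  # 0: perfect play always at least ties
```

Chess is the same kind of object in principle, just with a game tree far too large to enumerate this way.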
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Interestingly enough, omniscience doesn't really make most games easier; it only changes the gameplay model. It's not like I don't have to spend my own time looking at the memory, "naming" it, and translating machine data into useful words, rather than marrying the game to "realtime" and just playing it normally.
It depends on what is meant by omniscience - e.g. in the game of chess it could just mean knowing where all of the pieces are - or being able to know the best possible moves the other player could make so that you could always win - similar to how that is possible in noughts and crosses - where you can always at least tie - so it would in fact make the game easier.
I'm talking about it from a compatibilist perspective. If the word is in my posts, used in a context where I describe something, I am describing some real thing that observably exists, even if the word's denotation doesn't quite match the usage.

So what I mean by it is always going to be "omniscience with respect to ___"

It's hard to communicate this to someone who doesn't or hasn't played a lot of (specifically computer, not console) games, and who has never "cheated".

Omniscience is not possible with respect to "the world in which one's own mind temporally exists" and because "being transcended of space and time of some context" just wasn't possible before the last couple decades, at best we had imaginative masturbatory fantasy models of the thing.

It all comes down to this quirk of being able to stop time and look at the system.

Let's look back at your game of chess, ya? Really, the limit of absolute omniscience is the play clock. How is that?

Well, neither player of the game is a chess-man lacking any and all thought process, existing in a chess universe with fixed patterns of motion that similarly will not reach operational complexity to allow thought in the first place.

The first thing you notice is that if chess-men in chess had minds made of chess stuff, a chess-man could ask you any question, and you could answer it immediately.

This is "immediate inquisitive omniscience."

Instead, time stops between the frames of the game, and we have as much time as we want to think about it. Sometimes. The resolution of the momentary quantum events, the collapse of the move field into a turn, is the granular "Planck second" of the chessboard.

The player can have, quite easily, a complete understanding of the whole momentary state of the field just by looking at the board, and if they take infinite time can map every possible game.

This is "momentary state omniscience", different from "absolute omniscience".

As you can note, from the perspective of the game, the move list of such an individual game is going to demonstrate "godlike gameplay", because the player not only had omniscience with respect to the chess board but leveraged it.

The issue is, taking 1000 years or whatever to finish a chess game on the first move is generally frowned on.

Also, it's not really worth 1000 years of effort, even if the chessmen never see that passage of time.

Was the chess game made easier? No. It took 1000 years or whatever to play a game that could likely have been won in an hour without all that pointless bullshit, and I would bet whoever was being played against would be mighty pissed off, and that person also has the power to use that time on omniscience.

At the end of it, it just evaluates to the boring position of "white always wins" or "black always wins".

There are lesser forms of omniscience, too, though, in chess. The player looks down on the board and, assuming they are not a rank amateur, sees all the pieces and is aware of their moves and positions in that moment. The remotely seasoned player has "easy momentary omniscience".

Let's change tracks to a bigger simulation where most individuals except "the god" are "bound up in the stuff of the world, made of it and thinking via its arrangement".

Such individuals cannot have "easy momentary omniscience" or even "temporal freeze omniscience": the state cannot completely recursively contain the state, and if they freeze temporal advancement, they freeze themselves too.

The only thing that can do so is a thing not bound by the rules of that system. But yeah, that doesn't mean it is "free" for any thing to exercise such powers.

As discussed, sure, I could pause time and know what you are going to do five minutes from now assuming no intervention.

Really it would be saving the momentary state, you doing that thing, me letting time go forward, destroying the universe, replacing it with the image of the previous state, and then smugly saying nothing lest I disrupt the causality and my knowledge is soured, and I'm back to momentary state omniscience.

I'll note you can make an observation here: to have absolute omniscience with regards to an event, you have to process some subset of momentary states forward through time, the subset necessary being dependent on the "information rate" of the system. In our world, this limit is C.

In my simulation, that rate is "infinite" (there is an order of operations in which every thing that has a "turn" gets resolved before the next turn, and all information of the previous turn which has global reach is instantaneously transmitted and operated on in the next tick), and so the maximum rate at which the simulation can tick is dependent on the rate of C, and the geometry of the system it runs on.
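That "everything with a turn gets resolved before the next turn" ordering is essentially double-buffering: compute every cell's next value from the frozen current state, then swap. A toy sketch (all names illustrative, not from any real engine):

```python
def tick(state, rule):
    # Double-buffered update: every cell reads the *frozen* current turn,
    # so all information from the previous tick reaches every cell at once.
    return [rule(state, i) for i in range(len(state))]

def rule(state, i):
    # Toy rule on a ring: each cell becomes the sum of itself and its two
    # neighbours - a stand-in for "everything with a turn being resolved".
    return state[i - 1] + state[i] + state[(i + 1) % len(state)]

print(tick([1, 0, 0, 0], rule))  # [1, 1, 0, 1]
```

Because no cell's update can see another cell's *new* value within the same tick, the in-simulation information rate is effectively infinite per turn, while the wall-clock tick rate is bounded by the host hardware, as described above.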

So there are three forms of hierarchical omniscience, and each form takes... Some worse thing than "geometrically" more work than the last to do it.

And for what? As soon as I stop doing that much work, I realize that while I got good at not dying by doing a bunch of hard calculations, I never got good at the game by learning its general patterns. As soon as I stop doing all that work, or fuck it up in any way, my dude is going to get a bad end.

And it takes orders of magnitude more time and energy than just learning the strategy for what it is.

The end result is that sometimes it pays more to fail faster and learn to be good enough than it pays to be omniscient and perfect.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
Interestingly enough, omniscience doesn't really make most games easier; it only changes the gameplay model. It's not like I don't have to spend my own time looking at the memory, "naming" it, and translating machine data into useful words, rather than marrying the game to "realtime" and just playing it normally.
It depends on what is meant by omniscience - e.g. in the game of chess it could just mean knowing where all of the pieces are - or being able to know the best possible moves the other player could make so that you could always win - similar to how that is possible in noughts and crosses - where you can always at least tie - so it would in fact make the game easier.
I'm talking about it from a compatibilist perspective.
Some thoughts:
I'm talking about Alan Watts' dream thought experiment - applied to a simulation
"For if you were God and in the sense that you knew everything and you were completely transparent to yourself through and through. You would be bored"
Omniscience is not possible with respect to "the world in which one's own mind temporally exists"
I think it can be possible in the simulation I'm talking about... though it would be a posthuman type mind - it is like being in a video game with tools to access all of the variables and visuals, etc.
and because "being transcended of space and time of some context" just wasn't possible before the last couple decades, at best we had imaginative masturbatory fantasy models of the thing.
Yeah Alan Watts' talked about it in terms of dreams (though I interpret it as involving simulations).
The player can have, quite easily, a complete understanding of the whole momentary state of the field just by looking at the board, and if they take infinite time can map every possible game.
If the controller of a simulation devoted all of its resources to solving chess it might not take long. The subject of chess in the dream thought experiment is interesting. It says "one touch beep, would give you anything you wanted". But there could in fact be things you want that aren't possible - like brute forcing "go" to see the best possible game.
At the end of it it just evaluates to the boring position of "white always wins" or "black always wins".
Yep.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,795
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
I think it can be possible in the simulation I'm talking about... though it would be a posthuman type mind - it is like being in a video game with tools to access all of the variables and visuals, etc.
Except it can't.

It could be aware of everything in the simulation but again would be ignorant of the entirety of its own mind.

The infinite regress is just not possible or sensible.

The thing is that "infinite power" is not meaningful. For anything to be meaningful at all there has to be something that "isn't", "doesn't" or "won't". Enter suffering.
 

excreationist

Married mouth-breather
Joined
Aug 29, 2000
Messages
1,929
Location
Australia
Basic Beliefs
Probably in a simulation
I think it can be possible in the simulation I'm talking about... though it would be a posthuman type mind - it is like being in a video game with tools to access all of the variables and visuals, etc.
Except it can't.

It could be aware of everything in the simulation but again would be ignorant of the entirety of its own mind.
"For if you were God and in the sense that you knew everything and you were completely transparent to yourself through and through. You would be bored"
It's not clear whether it could be aware of its life outside of the simulation.... maybe it isn't. Maybe it is an uploaded mind. That would make it easier to hide memories and have very long term pleasures and spend a lot of time in the simulation and change the passage of time (to 75 years in 8 hours).
The infinite regress is just not possible or sensible.

The thing is that "infinite power" is not meaningful.
Well it could involve "fulfilling all your wishes".
For anything to be meaningful at all there has to be something that "isn't", "doesn't" or "won't". Enter suffering.
Well before the wishes there was a lack of those things. But after you've got everything you'd get bored.
 
Last edited: