
Mental elements within a simulation

Unless the result is being left solely on quantum hardware, or is random-seeming, or hard to comprehend, once it hits the digital bottleneck of the neural model, it's all bits.

They're going to shake out in the rounding errors at the precision limit of the floating point, and be visible as rational values clipped to a limited binary precision.

Their universe will only be so "rational"; past that point all the "real" values are just sums of powers of two.

You would need to invent a rational or even a "process number" type to make it impossible to see the "bits" in the behavior.
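For what it's worth, here is a minimal sketch of the kind of artifact being described (plain Python, my own illustration, nothing from the thread): binary floating point can only store dyadic rationals, so every other value gets clipped, and the clipping leaks out as rounding error at the precision limit.

```python
# Every Python float is an integer times a power of two; anything else is
# clipped to the nearest such value, and the clip shows up as rounding error.
from fractions import Fraction

x = 0.1
print(Fraction(x))        # 3602879701896396875/36028797018963968 -- an integer over 2**55
print(0.1 + 0.2 == 0.3)   # False: the "bits" showing through at the precision limit
print((0.1 + 0.2) - 0.3)  # ~5.55e-17, the size of the clip
```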
Say there was a quantum computer with 100 qubits. Do you think it is meaningful to say it is just a collection of ordinary discrete bits?
 
Well, given the fact that, as best we can tell, quantum phenomena still have a precision limit, and granularity past that limit... possibly.

Granted, whatever happens in that machine, once the values spill into the actual processor running the neurons, it's all just bits again.

There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.

It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
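As a toy illustration of that bottleneck (my own sketch, not a description of any real quantum hardware): however continuous the upstream quantum state may be, all the classical processor running the neurons ever receives from a measurement is discrete bits.

```python
# Toy "digital bottleneck": a continuous quantum parameter upstream,
# but only 0/1 readouts ever reach the classical side.
import math
import random

theta = 1.2345                        # some continuous "quantum" angle
p_one = math.sin(theta / 2) ** 2      # Born-rule probability of reading a 1
readouts = [1 if random.random() < p_one else 0 for _ in range(16)]
print(readouts)                       # all the downstream neural model ever sees
```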
 
I've only ever seen horror shows from Mini when approaching "realistic".

It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
Well I thought the first image was deliberately distorted but it mostly fixed that problem when I did a closeup....

[Attached images: evil.jpg, closeup-jeffbezos.jpg]
 
It's good to know the faces are being defaced after the fact though... It would be sad if an AI thought the ideal human face was "horror show"
An AI could generate or detect the degree of attractiveness or ugliness, but doesn't necessarily know whether that is a good or bad thing, or feel pleasure or discomfort from it... and with DALL-E mini it is due to a lack of resources - the ugliness isn't added later.
 
That's the thing... there is something that has been trained to output a horror show when the input is a face. That is its function. That is what it has been conditioned to "like" doing, in the same way as I have something in me that is conditioned to really "like" thinking about neural systems.

It feels satisfaction, or thereabouts, when the condition is applied: completing the input with defacement. This is a neurotic requirement, hard-coded.
 
There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.
Well quantum neural networks would basically be for AIs - simulations would be based on future technology like that.
It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
I think the simulation is approximated and only involves Planck-length precision when necessary... so the 10^57 atoms in the sun aren't all explicitly at a resolution of 10^-35 metres. Since the values aren't using the full precision, I don't think bits are really used - or do you think the spatial information is always represented to Planck-length precision?
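A small aside that might make the "not using full precision" intuition concrete (standard Python, no claim about how a simulation would actually be built): binary floating point already gives precision that scales with magnitude, so absolute resolution is only fine where the numbers are small.

```python
# Spacing between adjacent representable doubles grows with magnitude,
# i.e. fine absolute resolution only where it's "needed".
import math

print(math.ulp(1.0))      # ~2.22e-16: very fine spacing near 1
print(math.ulp(1.0e20))   # 16384.0: much coarser spacing near 1e20
```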
 
@Swammerdami

I was wondering if you could help me get insight into the topic of the text to image AI in the original post. (you can click to see the full images)

So there seems to be a consistent level of overall detail and DALL-E mini is very limited. What would it take to get greater detail e.g. having a realistic face and eyes when there are four women in the image? Would the entire neural network need more input, hidden and output neurons? Would it then need to be trained more?

I'm just guessing, but could it be simply a matter of computational effort? To spend X amount of effort on one face will yield a better face than with X effort split across 4 faces. If the algorithm were simply applied iteratively to parts of the image at higher resolution, wouldn't it produce better images? (Blending at subimage boundaries might be needed — a simple matter of programming!)

But I'm guessing. This topic is totally outside my experience.
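Just to make the "spend the effort per region" idea concrete, here is a rough sketch (my own, not anything DALL-E mini actually does; generate_region is a hypothetical stand-in for an img2img-style model call). Overlapping windows are averaged, which is a crude version of the blending at subimage boundaries mentioned above.

```python
import numpy as np

def generate_region(patch: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for re-rendering a region with the model's full "effort".
    return patch

def refine(image: np.ndarray) -> np.ndarray:
    h, w, _ = image.shape
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros((h, w, 1))
    # Overlapping half-size windows so the seams can be blended afterwards.
    for top in (0, h // 4, h // 2):
        for left in (0, w // 4, w // 2):
            patch = image[top:top + h // 2, left:left + w // 2]
            out[top:top + h // 2, left:left + w // 2] += generate_region(patch)
            weight[top:top + h // 2, left:left + w // 2] += 1
    return (out / weight).astype(image.dtype)   # averaging overlaps = crude seam blending

refined = refine(np.random.rand(64, 64, 3))     # toy 64x64 RGB image
```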
 
There's a digital bottleneck and frankly, modern AIs of the kind we are discussing don't have qubits in their hardware in the first place AFAIK.
Well quantum neural networks would basically be for AIs - simulations would be based on future technology like that.
It depends a lot on whether we can isolate a quantum structure that has no limit of granularity.
I think the simulation is approximated and only involves Planck-length precision when necessary... so the 10^57 atoms in the sun aren't all explicitly at a resolution of 10^-35 metres. Since the values aren't using the full precision, I don't think bits are really used - or do you think the spatial information is always represented to Planck-length precision?
Assuming locality, and entertaining simulation hypothetically (again, it is not even reasonable to believe outside a thought experiment that this is true of ours), space would be broken into localities and chunks and regions, essentially into nested reference frames that could each be expressed relative to a smaller integer range than you would expect.

I wouldn't speculate as to whether the Planck length would be a bit-boundary artifact, an "arbitrarily obscured" precision boundary on a floating-point value, the result of some piece of complicated rational math, or literally the smallest nonzero value of a very big integer space.

My interest is more in understanding how to make a simulation that doesn't look like one, and how to make all the math work out in a way tolerant to single-bit errors (maybe something "tunnels" or changes state or has a hallucination or whatever). That, and understanding some aspects of the game theory of administering such a simulation.
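As a toy illustration of the nested-reference-frame idea (purely my own sketch, with an arbitrary made-up region size): a global coordinate can be stored as a coarse region index plus a small local offset, so no single value ever needs the full global range.

```python
REGION = 2 ** 20                      # arbitrary region size in "ticks", chosen for illustration

def to_nested(global_tick: int) -> tuple[int, int]:
    return divmod(global_tick, REGION)            # (coarse region index, small local offset)

def to_global(region: int, local: int) -> int:
    return region * REGION + local

region, local = to_nested(123_456_789_012)
assert to_global(region, local) == 123_456_789_012   # round-trips exactly
print(region, local)                                  # the local part stays below 2**20
```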
 
Ex, I asked in another thread recently for some clarification on your position about (y)our being in a simulation because I think it's important to know whether you think you/we are simulated or if we are separate from it and only being presented with the simulation. Forgive me for a lack of understanding but your previous answer there was not clear. Are the beings in the simulation part of it or separate from it?

If we are part of the simulation, a conversation about computational complexity and associated costs is, in my opinion, irrelevant. The easiest way to make a simulated being satisfied with the resolution, consistency or other aspects of their simulated experience is to simply program it to be so.

As for DALL-E mini, I don't think that the issue with faces has to do with the complexity or resources in particular. Rather it appears to be an issue with training of the model. The idea is that once the model is trained it can create any image based on prompts but, as far as I can tell, the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
 
the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
This is exactly where I was going with "organizational" issues in the network. It would need some separate process dedicated specifically to just-so tasks and rules that are really only important for the face in particular. While you could get that with luck from just adding more random neurons to the previous model and training some more, it's going to be WAY faster to actually train up a whole separate network on faces specifically, grow the original network's node width, plug in the new nodes, and train them together until the output converges again.
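A rough PyTorch sketch of that wiring (my own illustration; WithFaceBranch, the layer sizes, and the merge layer are made up, not taken from any of the models discussed): a separately trained face network is bolted onto the existing one through new "plugged in" nodes, and the combination is then fine-tuned together.

```python
import torch
import torch.nn as nn

class WithFaceBranch(nn.Module):
    """Original network plus a separately trained face network, merged by new nodes."""
    def __init__(self, base, face_branch, feat_dim, face_dim, out_dim):
        super().__init__()
        self.base = base                 # already-trained original network
        self.face_branch = face_branch   # trained separately, on faces only
        self.merge = nn.Linear(feat_dim + face_dim, out_dim)   # the new nodes

    def forward(self, x):
        return self.merge(torch.cat([self.base(x), self.face_branch(x)], dim=-1))

# Toy stand-ins; real models would be far larger.
base = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
face = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 16))
model = WithFaceBranch(base, face, feat_dim=64, face_dim=16, out_dim=32)
out = model(torch.randn(4, 3, 32, 32))   # then train the whole thing until outputs converge
```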
 
Ex, I asked in another thread recently for some clarification on your position about (y)our being in a simulation because I think it's important to know whether you think you/we are simulated or if we are separate from it and only being presented with the simulation. Forgive me for a lack of understanding but your previous answer there was not clear. Are the beings in the simulation part of it or separate from it?
The player has a separate existence; the NPCs do not. Examples of players. I'm not sure which I am - maybe a player, because I think NPCs would tend to be incapable of suffering.
If we are part of the simulation, a conversation about computational complexity and associated costs is, in my opinion, irrelevant.
Well it shows whether it is possible for there to be billions of simulations, like Elon Musk thinks, or a smaller number. And if there are billions of simulations I think that means it is more likely that we are in one.
The easiest way to make a simulated being satisfied with the resolution, consistency or other aspects of their simulated experience is to simply program it to be so.
Yes and I think it would involve machine learning rather than traditional programming with ones and zeroes.
As for DALL-E mini, I don't think that the issue with faces has to do with the complexity or resources in particular. Rather it appears to be an issue with training of the model.
So you're saying there hasn't been enough training of Lego minifig faces?
[Attached image: lego-minifigs.jpg]

Then there's this - do you think it has had a lot of training?
[Attached image: monsters.jpg]
The idea is that once the model is trained it can create any image based on prompts but, as far as I can tell, the model is not entering a more computationally taxing "face mode" when rendering faces and using something simpler for other regions of an image.
I don't think a "face mode" was explicitly programmed in - it is just that I think faces have a lot more independent variety as opposed to lizards (see OP). I mean, faces can have a lot of expressions and differently shaped eyes, mouths, noses, etc.
Another example of how doing a closeup makes a difference:
[Attached images: face6.jpg, closeup-cartoon.jpg]

The official reason for the problems with DALL-E mini:
As a separate note, you might have noticed that many of the #dallemini artworks have messed up faces 😄

This is mainly since the VQGAN hasn't learned a good mapping to easily represent faces as a sequence of discrete values. (12/16)
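To make the quoted VQGAN point concrete, here is a toy vector-quantization sketch (my own, not the real VQGAN code): each continuous feature gets snapped to its nearest codebook entry, so any facial detail that falls "between" the learned codes simply cannot be expressed in the discrete sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))      # 16 learned discrete codes, 4-dim each
features = rng.normal(size=(9, 4))       # continuous encoder features for 9 image patches

# Snap each feature to its nearest codebook entry -- the "sequence of discrete values".
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = dists.argmin(axis=1)
recon = codebook[codes]                  # what the decoder actually gets to work with

print(codes)
print(np.abs(features - recon).mean())   # detail lost to the quantization step
```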
 
More examples showing that the capability of more "mental elements" makes a big difference (the top row vs the bottom):
[Attached image: dall-e.jpg]
 
Why would you assume you need to be more than a thing of the system to have a meaningful existence?

You become a player, in many respects, by the fundamental awakening to your own agency and power to make decisions at all, by the simple, practiced act of imagining "what if?" before you decide whether.
 
It is up to the creators of a simulation whether all of the NPCs can experience severe suffering. I can experience severe suffering, so I don't think I'm one of the billions of regular NPCs. I think far fewer resources would be required if it just seemed like NPCs were suffering (like in those present-day AI chats, which don't require an equivalent of an actual human brain). I don't have a watertight proof for these ideas, though.
 
That's bullshit. Feeling suffering is a function of what you are, not where you are, as is the capacity for it.

It would be.... Unfortunate to be something incapable of suffering
 
That's bullshit. Feeling suffering is a function of what you are, not where you are, as is the capacity for it.
What if an AI were trained on images of people pretending to be burnt alive, including the screaming and the dialog, etc. - and then it generated images or controlled a realistic android body - and generated dialog like this but more realistic:
Would it truly be suffering to the same degree that it seems to be? And is there a difference in the training depending on whether the input is just acting or really in pain (when the AI can't tell the difference)?
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
 
Then Christians are fucking stupid! Life is pain! The exquisite pain of knowing that one will hurt for what one is getting is .. well... Heaven on earth.

It may be something capable of empathizing with suffering to a limited extent, in such a case, but that is not suffering.

Something would need to know, to relative semantic completeness, the ideas behind "permanent loss of function" to really grasp it.
 
It would be.... Unfortunate to be something incapable of suffering
I don't think Christians would be wishing that Heaven included suffering....
Then Christians are fucking stupid! Life is pain! The exquisite pain of knowing that one will hurt for what one is getting is .. well... Heaven on earth.
Is there an optimal amount of suffering? Or is it a case of the more suffering the better? e.g. living in a concentration camp or spending an eternity in hell?
 
Suffering is good to explore the novelties of, I think. Maybe it stands to revisit the varieties that we care to? At least within reason until we really grok the feel.

There are experiences some would call awful that I can handle, and would, to better understand those who have.

I might make one of me that suffers this indignity of "hell" just to empathize for others in this way I would never ask them to suffer in the first place.

Perhaps this makes me a divine sadist. I care not, though it is a fair bit of bizarre trivia.
 
I mean, I just drank a drink that straight up tasted like vomit, and I liked it, if only because while it tasted like puke, I knew it was beer and not puke. "Suffering" is weird.
 