• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Simulations/matrix and the speed of light

Also on the topic of our world being a computer game -

There is the concept of pseudo-random numbers and "seeds"... like in Minecraft - you can deterministically recreate the entire world based on a single "seed".

A similar process is in No Man's Sky where any of the 18 quintillion planets can be deterministically recreated based on the planet's number....
 
Pseudo-random numbers aren't actually random.
They are a way of doing Groundhog Day scenarios that repeat exactly. Replaying the same time/seed might be the only way of determining whether the randomness is pseudo-random or truly random.... (BTW the same happens in Doom - when you play back a recorded demo, the random number generator repeats the same numbers)
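A minimal Python sketch of that determinism (illustrative only - `generate_world` is a made-up stand-in for Minecraft-style terrain generation):

```python
import random

def generate_world(seed, size=5):
    """'Generate' terrain heights from a single seed, the way Minecraft
    or No Man's Sky recreate an entire world from one number."""
    rng = random.Random(seed)  # private generator, isolated from global state
    return [rng.randint(0, 99) for _ in range(size)]

# Same seed, same world -- an exact Groundhog Day replay,
# just like a Doom demo replaying the same RNG outputs.
print(generate_world(42) == generate_world(42))  # True
```

The sequence only looks random; given the seed, every value is fixed in advance, which is exactly what makes recorded-demo playback possible.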

re: # of planets NSRR.

Ah, but I twaddle ....
What does NSRR mean?
 
Generation gap going the other way. "No S**T Red Ryder"

My way of achieving independent random sequences from a good pseudo-random number generator on a DEC machine in the '70s was to use variable decay from an elevator start-time signal as the seed for the sequence start. The likelihood of identical elevator start signals is usually pretty near zero.

The outcome is produced with a certified number table being read before a request is made to open a gate for the next local elevator start-signal time, which sets where the number sequence becomes fixed for x trials.

Others used vacuum tubes in radios tuned to a randomly selected station, or some other unlikely signal, as the source key for determining the number of a running PRN generator.
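The general trick described above - keying the generator off an unpredictable physical event time - might look something like this today (a hypothetical sketch; `time.perf_counter_ns` jitter stands in for the elevator start signal):

```python
import random
import time

def seed_from_event():
    """Use a hard-to-predict timestamp as a seed, analogous to keying
    off an elevator start-time signal. (Illustrative stand-in only.)"""
    return time.perf_counter_ns()

def independent_sequence(n=5):
    rng = random.Random(seed_from_event())
    return [rng.random() for _ in range(n)]

# Two runs seeded from different event times give (almost surely)
# unrelated sequences, even though each run is internally deterministic.
a = independent_sequence()
b = independent_sequence()
```

The point is the same as with the elevator signal: the generator itself is deterministic, so all the unpredictability has to come from the seed.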

Otherwise, pseudo-random programs, whether they be tables or calculations, are useless to scientific enquiry.

Just rambling on down memory lane.
 
Generation gap going the other way. "No S**T Red Ryder"
I like "No sh*t Sherlock"

....Otherwise, pseudo-random programs, whether they be tables or calculations, are useless to scientific enquiry.
I think a single seed (or multiple?) to generate huge procedural environments (e.g. a galaxy or geology) is a useful idea in terms of optimising video games....

On the topic of randomness, these random faces were completely generated by AI - I think it starts off with a low resolution then progressively adds detail....
https://www.technologyreview.com/20...ces-show-how-algorithms-can-now-mess-with-us/
[Attached image: screen-shot-2018-12-14-at-10.13.53-am-9.png]
 
And then I pondered telescopes. I'm in the market for one and I'm picking up on the varying optics science and whatnot. The topic? Eyepieces and their field of view - wide, super wide, etc... So you look in the telescope through a cheap eyepiece, and the simulation renders a narrow field of sight. Get a more expensive eyepiece, and now the simulation needs to render a wider field of view.

And then we get to photography, which really just kind of flushes the simulation argument down the toilet. So, you look in the eyepiece and see a deep space object, in a manner of speaking. You take 100 photos and stack them, and you really see the deep space object and its colour. Which then deflates this idea of simulation and resolution, because we are recording something we can't even see with our own eyes. Meaning that we are witnessing a simulation that is providing resolution we can't perceive directly and need technology to capture!

This kicks the "the simulation doesn't need to render to such a high degree everywhere" argument to the curb, because the above indicates it is rendering things we can't even see.
I think he stated that the simulation of the universe beyond our immediate visual environment is coarsely rendered until we actually point telescopes at distant stuff, at which point the software driving the illusion increases the resolution at the location being observed to the level we would expect to see. He hasn't presented any evidence to support this hypothesis that I can think of.
Somehow the quantum computer is able to render the field of view based on which eyepiece I use. That's pretty darn impressive coding - so impressive it is wickedly absurd. It would require the code to be able to recognize optics, and that'd require near molecule-for-molecule tracking of the materials being processed.
Not really. All that the code would have to do is detect whether you would realize that something is simulated or not. It could be done by just simulating your brain, seeing if that brain notices anything, and if it does, increasing the "resolution" until it passes. If that sounds complex, consider this: the human brain is pretty small. Even if you have to simulate the brain a million times for every event, it's still cheaper than simulating every atom of a star, for example.

The photography example is a bit harder. That would basically highlight that there could be a very long period of time between the perception and any human figuring out what's going on. At that point, the simulation would either have to be rolled back so that all the photographs of the star could be replaced, or the perception of the photographs themselves would have to be altered to be higher resolution than they actually are.
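The "brain is cheaper than a star" claim checks out on the back of an envelope (round numbers assumed purely for illustration):

```python
# Order-of-magnitude comparison: atoms in a brain vs. atoms in a Sun-like star.
ATOM_MASS = 1.7e-27   # kg, roughly one hydrogen atom
BRAIN_MASS = 1.4      # kg
SUN_MASS = 2.0e30     # kg

atoms_in_brain = BRAIN_MASS / ATOM_MASS   # ~8e26 (real figure is lower: heavier atoms)
atoms_in_sun = SUN_MASS / ATOM_MASS       # ~1.2e57

# Even a million brain-simulations per event is vastly cheaper than one star:
million_brains = 1e6 * atoms_in_brain     # ~8e32
print(atoms_in_sun / million_brains)      # ~1e24 -- the star still wins by ~24 orders of magnitude
```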
 
^ ^ ^
Wait a sec...

That reads like the old "brain in a jar" dilemma. I thought that a simulated universe would mean that there is no reality, including the brain - that we and everything else would just be computer code - that we only thought we were real, like characters in SimCity, but with the characters believing they are alive.
 
.....The photography example is a bit harder. That would basically highlight that there could be a very long period of time between the perception and any human figuring out what's going on. At that point, the simulation would either have to be rolled back so that all the photographs of the star could be replaced, or the perception of the photographs themselves would have to be altered to be higher resolution than they actually are.
Just in case that seemed far-fetched - here's another example of history-alteration... about 1 minute in, the AI updates the audio and video based on a plain-text alteration of the transcript....
 
^ ^ ^
Wait a sec...

That reads like the old "brain in a jar" dilemma. I thought that a simulated universe would mean that there is no reality, including the brain - that we and everything else would just be computer code - that we only thought we were real, like characters in SimCity, but with the characters believing they are alive.
Not necessarily. It could be that part of the simulation is always accurate, regardless of whether anyone is watching. For example, everything on Earth. Or a single brain. But the rest of the universe is half-assed and only simulated as well as is required to maintain the illusion.
 
On the topic of randomness, these random faces were completely generated by AI - I think it starts off with a low resolution then progressively adds detail....
https://www.technologyreview.com/20...ces-show-how-algorithms-can-now-mess-with-us/
[Attached image: screen-shot-2018-12-14-at-10.13.53-am-9.png]

My take is that these images are obviously either doctored or incrementally generated. Too much regularity, not enough flaws due to whatever etc.

Yes they do look fairly realistic until one stares at them for a moment or two.


An issue with design principles or expert input? Back in the day we were still looking for algorithms for speech recognition. Then, in the sixties, we hit on the idea of throwing bandwidth and sampling rates at the problem. Suddenly speech recognition and simulation became much easier. Faster, more. Whodathunk? After that researchers began to explore the idea of capacity and speed as principles for advances in sensory evolution. Duh.
 

My take is that these images are obviously either doctored or incrementally generated. Too much regularity, not enough flaws due to whatever etc.

Yes they do look fairly realistic until one stares at them for a moment or two.....
I think the first one on the left has heaps of flaws and not a lot of regularity. Actually it has a big flaw - her top is very weird - one shoulder is showing....

When you stare for a moment or two when you already know that they are fake you start to find things that reinforce that belief.

BTW here is a site where you can try and guess which one is real....
https://www.whichfaceisreal.com/
It is flawed because there are some secondary signs that a face is real... ideally the test should only involve how real the face looks... I suspect you will be fooled at least once on that site... especially if you try to choose quickly...

Here is an attempt by AI to put a second fake face next to the main one: (a second face was a sign that the picture is real)
[Attached image: Screen Shot 2020-10-12 at 9.14.32 am.png]
Note this GAN-based AI has only been public for about 2 years. It works by having a second AI try to tell whether the face is fake while the first AI tries to fool it. So it becomes more and more realistic...
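That adversarial idea can be caricatured in a few lines. This is a drastic simplification (the "discriminator" here is fixed and the "generator" just hill-climbs to fool it; a real GAN trains both as neural networks with gradients), but it shows the fooling dynamic:

```python
REAL_MEAN = 4.0  # pretend "real data" clusters around this value

def discriminator(x):
    """Score how 'real' x looks; peaks where the real data lives."""
    return 1.0 / (1.0 + (x - REAL_MEAN) ** 2)

def train_generator(steps=200, lr=0.1, start=0.0):
    """The generator mutates its output, keeping whatever fools D best."""
    g = start
    for _ in range(steps):
        g = max((g, g + lr, g - lr), key=discriminator)
    return g

fake = train_generator()  # ends up near 4.0: the fake has learned to look real
```

In a real GAN the discriminator also keeps learning, which is why the fakes have to get better and better to keep fooling it.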
 

[Attached image: Screen Shot 2020-10-12 at 9.35.12 am.png]
...
Please disprove the possibility that most of the stars that aren't closely observed are in fact running on a low resolution mode (perhaps billions or trillions of particles each) rather than every single particle running constantly. (e.g. 10^57 atoms in ALL stars the size of our Sun)....

Please disprove the possibility that you are a figment of my imagination.

You can't.

It doesn't matter; if I want to claim that you are, it's on me to provide reasons to believe so. Just like it's on you to provide reasons to believe we're in a simulation. You are failing at that.
 
Somehow the quantum computer is able to render the field of view based on which eyepiece I use. That's pretty darn impressive coding - so impressive it is wickedly absurd. It would require the code to be able to recognize optics, and that'd require near molecule-for-molecule tracking of the materials being processed.
Not really. All that the code would have to do is detect whether you would realize that something is simulated or not. It could be done by just simulating your brain, seeing if that brain notices anything, and if it does, increasing the "resolution" until it passes. If that sounds complex, consider this: the human brain is pretty small. Even if you have to simulate the brain a million times for every event, it's still cheaper than simulating every atom of a star, for example.

The photography example is a bit harder. That would basically highlight that there could be a very long period of time between the perception and any human figuring out what's going on. At that point, the simulation would either have to be rolled back so that all the photographs of the star could be replaced, or the perception of the photographs themselves would have to be altered to be higher resolution than they actually are.
I don't buy that, because the simulation needs to know whether a lens is super wide or not. And it has to figure this out from the physical properties of the lens itself.

But let's drop that and move to the placebo... how is the simulation working there? Running a study where people are "cured" by a placebo?
 
....it's on you to provide reasons to believe we're in a simulation. You are failing at that.
There would be demand for video games that are indistinguishable from reality. In the future it would probably be possible so therefore there would be those kinds of games. It would be more likely we're in one of those simulations than being in base reality.
 
Somehow the quantum computer is able to render the field of view based on which eyepiece I use. That's pretty darn impressive coding - so impressive it is wickedly absurd. It would require the code to be able to recognize optics, and that'd require near molecule-for-molecule tracking of the materials being processed.
Not really. All that the code would have to do is detect whether you would realize that something is simulated or not. It could be done by just simulating your brain, seeing if that brain notices anything, and if it does, increasing the "resolution" until it passes. If that sounds complex, consider this: the human brain is pretty small. Even if you have to simulate the brain a million times for every event, it's still cheaper than simulating every atom of a star, for example.

The photography example is a bit harder. That would basically highlight that there could be a very long period of time between the perception and any human figuring out what's going on. At that point, the simulation would either have to be rolled back so that all the photographs of the star could be replaced, or the perception of the photographs themselves would have to be altered to be higher resolution than they actually are.
I don't buy that, because the simulation needs to know whether a lens is super wide or not. And it has to figure this out from the physical properties of the lens itself.
No, it doesn't. If you can figure out which type of lens you're watching through, so can the simulation. And if you can't, does it matter?

But let's drop that and move to the placebo... how is the simulation working there? Running a study where people are "cured" by a placebo?
Are you referring to regular medical trials? If the simulation was that coarse, I doubt there would be even an effort to maintain realism or fool the subjects into thinking they are not in a simulation.
 
....the simulation needs to know if a lens is super wide or not. And it has to figure this out based on the physical properties of the lens itself.
This can be done in real-time raytracing... materials can have an IOR (index of refraction) and the refraction takes the geometry into account:
https://developer.nvidia.com/blog/my-first-ray-tracing-demo/
[Image: RTinOneWeekend-1024x577.png]


BTW raytracing can involve individual photons... it can look "grainy" (like with film, etc) if not enough photons are involved, but this can be improved with "denoising"....
[Image: mara17towards-teaser.png]
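For concreteness, the refraction step a ray tracer performs per material IOR is just Snell's law. Here's a minimal 2D sketch of the standard vector-form refraction formula (names and setup are illustrative, not any particular engine's API):

```python
import math

def refract(direction, normal, n1, n2):
    """Bend a unit direction vector crossing from IOR n1 into n2
    (Snell's law in vector form). Returns None on total internal reflection."""
    dx, dy = direction
    nx, ny = normal                       # unit normal, pointing against the ray
    cos_i = -(dx * nx + dy * ny)          # cosine of the incident angle
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                       # total internal reflection
    coef = eta * cos_i - math.sqrt(k)
    return (eta * dx + coef * nx, eta * dy + coef * ny)

# Air -> glass (IOR 1.5) at 45 degrees: the ray bends toward the normal.
incoming = (math.sin(math.radians(45)), -math.cos(math.radians(45)))
bent = refract(incoming, (0.0, 1.0), 1.0, 1.5)
```

The graininess mentioned above comes in when a pixel's colour is estimated by averaging many randomly sampled ray paths like this (Monte Carlo); too few samples means high variance, which is what denoisers clean up.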
 
....the simulation needs to know if a lens is super wide or not. And it has to figure this out based on the physical properties of the lens itself.
This can be done in real-time raytracing... materials can have an IOR (index of refraction) and the refraction takes the geometry into account:
https://developer.nvidia.com/blog/my-first-ray-tracing-demo/
[Image: RTinOneWeekend-1024x577.png]


BTW raytracing can involve individual photons... it can look "grainy" (like with film, etc) if not enough photons are involved, but this can be improved with "denoising"....
[Image: mara17towards-teaser.png]

How raytracing works though is that it traces the "light" from the camera backwards to the objects. Applied to the simulation hypothesis, this means that you trace back light that hits a person's eyes backwards through the telescope to the stars that are light years away. And to compound the problem, as was mentioned before, it's not just the eyes you have to worry about, because you can digitize the image, process it, and have a computer describe it to you so you also have to worry about all the other senses. It gets too complicated to handle as a geometric model. You'd need AI to manage the complexity and keep up the illusion at the point where the information is processed: your brain.
 
....You'd need AI to manage the complexity and keep up the illusion at the point where the information is processed: your brain.
Yeah, I think AI would definitely be involved.... AI-based physics and rendering (which can be thousands or millions of times faster)... if it is a low-"CPU" simulation, AI could partly control NPCs directly rather than having only physics control them.
 
....it's on you to provide reasons to believe we're in a simulation. You are failing at that.
There would be demand for video games that are indistinguishable from reality. In the future it would probably be possible so therefore there would be those kinds of games. It would be more likely we're in one of those simulations than being in base reality.

This:
It would be more likely we're in one of those simulations than being in base reality.

Does NOT follow from this:
There would be demand for video games that are indistinguishable from reality. In the future it would probably be possible so therefore there would be those kinds of games.


You have to explain how the former necessarily follows from the latter. You have to explain how we get from "video games exist" to "we live in a video game". Do you not understand this?

Second, at the present time, we don't really understand the nature of reality (Copenhagen interpretation vs. Everett interpretation). Therefore, it is premature to discuss the development of simulations "that are indistinguishable from reality", or talk about a "base reality" with any degree of confidence. At a very minimum, you have to define what a base reality is, compare it to the reality we (appear to) inhabit, and provide a way to test your hypothesis that this apparent reality is a simulation.

You are free to believe whatever you like, but if you want to debate this topic and convince others, you have to do more than just endlessly repeat baseless assertions.
 
This:
It would be more likely we're in one of those simulations than being in base reality.

Does NOT follow from this:
There would be demand for video games that are indistinguishable from reality. In the future it would probably be possible so therefore there would be those kinds of games.

You have to explain how the former necessarily follows from the latter. You have to explain how we get from "video games exist" to "we live in a video game". Do you not understand this?
I'll reply to this part of your post. I don't think that

There would be demand for video games that are indistinguishable from reality. In the future it would probably be possible so therefore there would be those kinds of games

can be accurately summarised as "video games exist". Do you understand why I don't think my quote is equivalent to "video games exist"?

I am saying video games that are indistinguishable from reality will probably exist in the far future. "Base reality" means a world that isn't in a simulation. I'll reply more tomorrow but maybe you could respond to this post.
 