
Quantum Mechanics - The Science of Absolute Connection

We get the interference pattern not only with photons but with electrons, and we get it even when we fire one electron at a time.
Yes, one electron at a time is still going to be diffuse, because an electron is made of other things, many things in fact, propagating through space.

Because C and D don't get forced into A or B at the sensor, they hit the back wall as C or D rather than as A or B.

But whether it's C->A or D->B, it's either C->A or D->B the whole way.

The sensors just force the collapse, either from a C to an A or from a D to a B. You just can't tell the difference between a C and a D until it becomes an A or a B.

That's superdeterminism.
 
I’m not sure what you mean by “diffuse,” but superposition of different states is a fundamental property of QM, and it is precisely mathematically defined. Superposition is bedrock to the theory. If you reject it, you are rejecting QM in toto.
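To make that concrete, the textbook statement of superposition for a two-state system is:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

where \(|\alpha|^2\) and \(|\beta|^2\) are the probabilities of finding each state on measurement.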


For instance, let's look at two entangled particles. Instead of seeing it as "both particles have both spins until they don't," suppose we see it as "both particles have exactly one value, the value of which has a fixed mass, one containing the value and the other containing the anti-value, and when the first thing that happens to them happens, that value is applied in that event to produce the outcome." One particle spins up, the other spins down.

I don’t completely follow this, but it sure sounds as if you are comparing entanglement to the classical example of paired gloves in different boxes I gave — that the value is preset. QM says quite the opposite. If you want to go with preset, you’ve got to reject QM and develop a new theory. Hossenfelder claims superdeterminism is that new theory, insofar as I understand her argument.
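For contrast with the preset-value picture, the state QM actually assigns to such a spin pair is the singlet, in which neither particle has a definite spin before measurement:

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|\!\uparrow\rangle_1 |\!\downarrow\rangle_2 \;-\; |\!\downarrow\rangle_1 |\!\uparrow\rangle_2\bigr)
\]

The superposition over both orderings is precisely what no preset assignment of values can reproduce across all measurement bases, which is the content of Bell's theorem.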
 
We get the interference pattern not only with photons but with electrons, and we get it even when we fire one electron at a time.
Yes, one electron at a time is still going to be diffuse, because an electron is made of other things, many things in fact, propagating through space.

Because C and D don't get forced into A or B at the sensor, they hit the back wall as C or D rather than as A or B.

But whether it's C->A or D->B, it's either C->A or D->B the whole way.

The sensors just force the collapse, either from a C to an A or from a D to a B. You just can't tell the difference between a C and a D until it becomes an A or a B.

That's superdeterminism.

Are you specifically referring here to quantum superdeterminism, as argued for by Hossenfelder?
 
Or the cat is both alive and dead in different branches of reality, each cat quantum-entangled with a different version of the observer. Weird as this is, all the other weirdness of QM goes away. The MWI is local, deterministic and realist, with no spooky action at a distance. And no wave function collapse either, which otherwise is inexplicable.
Many worlds? Complete with branching into new worlds as time goes by? That's almost too much for me to take.
Why? It violates no known physics, including conservation principles, and it does away with indeterminism, nonlocality and antirealism.
It's the multiple coexisting universes that's the problem, universes that multiply as time goes on. If it's a finite number, then that number grows enormously over time, though if it's an infinite number, it stays infinite.
If space is also quantum, one could argue that the number of possibilities is also finite, albeit large.
 
If the glove experiment alluded to earlier were quantum, it would go something like this:

The gloves in the boxes are quantum-entangled, such that each glove in each box is in a left-right superposition.

I separate the boxes by a long distance — it could be all the way across the universe.

Now someone opens one of the boxes. This act instantly collapses the wave function for both boxes — even if they are on opposite sides of the universe (non-locality, or spooky action at a distance). If opening the first box yields a right-hand glove, the wave-function collapse forces the second glove to be left-handed. Note again that these states of affairs were not pre-set, as in the classical example. Until the first box was opened, the gloves were in superposition and neither left- nor right-handed. This is anti-realism — the gloves only become “real” in the classical sense at wave-function collapse.
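In the notation used above, the quantum glove pair would look something like this (a schematic rendering of the thought experiment, not a description of any real apparatus):

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|L\rangle_1 |R\rangle_2 + |R\rangle_1 |L\rangle_2\bigr)
\]

Opening box 1 and finding a right-hand glove projects the whole state onto \(|R\rangle_1 |L\rangle_2\) at once, however far apart the boxes are.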

And the collapse is indeterministic — unlike a coin toss, where the odds for heads or tails are 50/50, the collapse is truly random. If one knew all the variables before tossing a coin, one could predict accurately the outcome of the toss. Not so in QM, since there are no variables to know prior to opening the box.
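A toy simulation of the statistics (a minimal sketch; the pseudo-random draw stands in for genuine quantum indeterminism, which no classical program can truly reproduce):

```python
import random

def open_entangled_boxes():
    """Open one box; the outcome is random, and the other box is forced opposite."""
    first = random.choice(["left", "right"])   # truly random in QM; pseudo-random here
    second = "right" if first == "left" else "left"
    return first, second

trials = [open_entangled_boxes() for _ in range(10_000)]
print(sum(a == "left" for a, _ in trials) / len(trials))  # each box alone: ~0.5
print(all(a != b for a, b in trials))                     # pairs: always anti-correlated
```

Each box on its own looks like a fair coin, yet the pair is perfectly anti-correlated every single time.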

Classical: locality, realism, determinism.

Quantum: non-locality, anti-realism, indeterminism.

Einstein objected to this and said there were hidden variables we didn’t know about that pre-set the values of the gloves, so QM was incomplete. Years later it became possible to experimentally test Einstein’s claim. Result: there are no local hidden variables, and indeed there cannot be any local hidden variables, even if QM turns out to be wrong and is replaced by another theory. So Einstein was wrong, BUT —
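The experimental test referred to here is a Bell/CHSH-type inequality. Any local hidden-variable account must satisfy

\[
S = \bigl|E(a,b) - E(a,b') + E(a',b) + E(a',b')\bigr| \le 2,
\]

where \(E(a,b)\) is the correlation between measurement settings \(a\) and \(b\), while QM predicts, and experiment confirms, violations up to \(2\sqrt{2}\).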

The many worlds interpretation eliminates non-locality, and restores realism and determinism, without resorting to the experimentally excluded local hidden variables.

Superdeterminism exploits a loophole in the experimental setup that refuted hidden variables to restore a classical picture of reality. I’m curious to know whether Jarhyn, when discussing superdeterminism, is specifically referring to this idea, championed by Sabine Hossenfelder.
 
I’m not sure what you mean by “diffuse,” but superposition of different states is a fundamental property of QM, and it is precisely mathematically defined. Superposition is bedrock to the theory. If you reject it, you are rejecting QM in toto.


For instance, let's look at two entangled particles. Instead of seeing it as "both particles have both spins until they don't," suppose we see it as "both particles have exactly one value, the value of which has a fixed mass, one containing the value and the other containing the anti-value, and when the first thing that happens to them happens, that value is applied in that event to produce the outcome." One particle spins up, the other spins down.

I don’t completely follow this, but it sure sounds as if you are comparing entanglement to the classical example of paired gloves in different boxes I gave — that the value is preset. QM says quite the opposite. If you want to go with preset, you’ve got to reject QM and develop a new theory. Hossenfelder claims superdeterminism is that new theory, insofar as I understand her argument.
No, I really don't. Superdeterminism is already accepted as "unfalsifiable". While this doesn't settle the debate, the point is that "all finite systems are represented in the set of deterministic systems". All of them. The infinity of all finite systems is "deterministic".

Superdeterminism is just the specific way you describe this finite system deterministically.

It doesn't matter that it grossly complicates aspects of QM, as long as the math works out.

This is not to say that the dice are not rolled at the moment, merely that you could replay the universe by replaying those rolls on those dice at those moments of the system.

Which is the whole point of my argument as regards determinism.

The Fine Structure Constant is interesting because it is, perhaps, the one most perplexing thing in the entire universe: it doesn't seem to come from any of the math we understand or even otherwise have access to, and without it nothing else can be calculated about the universe, because the speed of light is relative to the unitless fine structure constant.

It comes from something apparently abjectly weird, a true "measured", apparently irrational number, and in some ways I think it is potentially the best evidence we have available for other universes outside of - or external to - our own somehow. It implies many other inaccessible numbers from our perspective. And if we discover this number is of finite complexity? That would make it all the more weird, because while it would not necessarily prove the existence of other universes... why that one?!? It's not just a number drawn out of a hat, it's a number drawn out of a hat that does not otherwise exist except by the evidence of this "number" being fundamental to how the universe functions. Maybe it's the only number under which math occurs in which we exist to question why that number exists, but still... what about the other ones?
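For reference, the constant under discussion is, in SI form,

\[
\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.035999},
\]

a dimensionless number, which is why its value reads as a fact about the universe rather than an artifact of our choice of units.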
 
We get the interference pattern not only with photons but with electrons, and we get it even when we fire one electron at a time.
Yes, one electron at a time is still going to be diffuse, because an electron is made of other things, many things in fact, propagating through space.

Because C and D don't get forced into A or B at the sensor, they hit the back wall as C or D rather than as A or B.

But whether it's C->A or D->B, it's either C->A or D->B the whole way.

The sensors just force the collapse, either from a C to an A or from a D to a B. You just can't tell the difference between a C and a D until it becomes an A or a B.

That's superdeterminism.

Are you specifically referring here to quantum superdeterminism, as argued for by Hossenfelder?
Yes. I could swear I said so earlier. She's a fairly brilliant mathematician, even if she hasn't approached the language of compatibilism with the same rigor as she approached number theory.

What is clear to us is that we do not know how to determine certain things, and there are certain things that we cannot possibly calculate. We can agree on that, but we cannot agree that there is no fundamental mechanism of determination of this stuff. Something is happening by some law, even if I don't understand what that law is yet. Even if it is not a law I can ever directly measure, even if that law is fucking weird, like "f(e,1/137.036...". Let all our units reference the constants as "1", since they are arbitrary units otherwise, and everything collapses down to (fsc) = 1/c.
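That last identity does hold in one standard convention: in Gaussian units the fine structure constant is \(\alpha = e^2/(\hbar c)\), so setting \(e = \hbar = 1\) (the post doesn't name its unit system, so this is a guess at the intended convention) gives

\[
\alpha = \frac{e^2}{\hbar c} \quad\longrightarrow\quad \alpha = \frac{1}{c} \qquad (e = \hbar = 1).
\]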

We don't know that number. We know we can use it for calculating masses, and then convert, but the numbers get very large very fast, to the point where our numbers are just faster, if sloppier, to calculate.

If you want absolute honesty out of me: I think there are patterns in the math that we know that express in reality in what might be very bizarre ways from our perspective, and I don't think we choose where in that pattern we started. But I do know that even if the facts we know of math imply events occurring some way in our world, what that has made true is that we are entities existing as processes which are not jerked around, but will become as we have happened, by a bunch of absurd and chaotic shit happening to conspire in a chaotic way as to start figuring that math out.

My thought when it comes to the idea of quantum mechanics is that it has some really cool discussions of systems which produce behaviors that we can only access by doing a difficult calculation.

It's like the difference between having a floating point instruction calculate something in a single frame vs having a "dwarves water computer" calculate that same number. From our point of view, an infinitely complex number is calculated in an instant, to ridiculous precision (the precision with which we understand all the other numbers in relation to the fine structure constant, 1, and pi), including really big and small numbers.

To me, that's a pretty big deal, but it doesn't say much about stuff that happens on large scale beyond the idea that big things can be set on tiny triggers, even quantum-scale ones.

Also, some folks apparently figured out how to bias a quantum event? I think I saw that in the news a few days ago. It was from a reddit post though, so you're gonna have to dig it up if you really care to know more. I'll trust it about as far as it doesn't get refuted or does turn up in a few years in the news, assuming AI doesn't live up to the hype.
 
As the paper points out, superdeterminism argues that there is a vast conspiracy of nature, begun at the big bang, to fake the result of every Bell test. It’s crazy, just like Last Thursdayism in religion.
 
I’d also note that if superdeterminism is true, there can be no free will of any kind, including the compatibilist free will that Jarhyn and I support.

This is because, so far as I can tell, superdeterminism leads to modal collapse; i.e., all contingently true propositions collapse to necessary truths about the world. This, of course, is outrageously at odds with basic logic — it means that Oswald killing Kennedy is just as much a necessary truth as triangles having three sides.
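In modal-logic notation, modal collapse is the schema

\[
p \rightarrow \Box p,
\]

i.e., every truth is a necessary truth, which erases the distinction between the contingent and the necessary.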

The reason is that superdeterminism eliminates even the possibility of counterfactuals. I may say, “Today I accepted a job in Boston, because the same job in New York paid less.” It’s true that the job in New York was “determined,” so to say, to pay less than the same job in Boston; but things didn’t HAVE TO BE that way; and had the job in New York paid more, I would have accepted that job instead. If, OTOH, it HAS TO BE that the job in New York pays less, then I lack all counterfactual possibilities and I am a puppet. Of course, if that’s the way the world is, then so be it; but superdeterminism gives us no reason to believe that the world is that way, because there is no reason to believe that superdeterminism is true; and in fact, as Jarhyn himself pointed out, it’s unfalsifiable anyway, just like Last Thursdayism, or the idea that we are in a simulation for which there can be no evidence, even in principle.
 
I'm just splattering things on the wall here, but if there were an infinite number of realities, would that result in a net zero need for energy for the entire system?
 
I’d also note that if superdeterminism is true, there can be no free will of any kind, including the compatibilist free will that Jarhyn and I support.

This is because, so far as I can tell, superdeterminism leads to modal collapse; i.e., all contingently true propositions collapse to necessary truths about the world. This, of course, is outrageously at odds with basic logic — it means that Oswald killing Kennedy is just as much a necessary truth as triangles having three sides.

The reason is that superdeterminism eliminates even the possibility of counterfactuals. I may say, “Today I accepted a job in Boston, because the same job in New York paid less.” It’s true that the job in New York was “determined,” so to say, to pay less than the same job in Boston; but things didn’t HAVE TO BE that way; and had the job in New York paid more, I would have accepted that job instead. If, OTOH, it HAS TO BE that the job in New York pays less, then I lack all counterfactual possibilities and I am a puppet. Of course, if that’s the way the world is, then so be it; but superdeterminism gives us no reason to believe that the world is that way, because there is no reason to believe that superdeterminism is true; and in fact, as Jarhyn himself pointed out, it’s unfalsifiable anyway, just like Last Thursdayism, or the idea that we are in a simulation for which there can be no evidence, even in principle.
That has not been my interpretation; rather, my interpretation has been that things being the way they are is still reliant on *prior events*. Its being a necessary truth can still be "the prior antecedent necessary truths", and it is not obvious that the outcome would come from the input: the appearance of the information only accesses the outcome because of the action of the system in the middle, unless I misread something. That would mean that even if we are points on a line in some large piece of continuous and well-ordered mathematical series, we can still have responsibility towards the continued calculation. It can be as mathematically certain, as "necessary", that the job in New York paid less, but it will always be as necessary that you wouldn't know that, would need to access information, think about which job paid less, and select that job.

My understanding is that even "mathematical necessity" does not change anything because the outcome can't be read by the oracle anyway.

The point of anything proven "unfalsifiable" is that everything about the unfalsifiable thing becomes mathematically applicable. "X is proven unfalsifiable; Y would imply X is false, therefore !Y; Y is either unknowable or true."

You can't prove many things with the unfalsifiable discussion of an invisible pink unicorn, since the character has no bounds, but you CAN draw some interesting conclusions using something whose mechanisms can have questions asked about them, in the same way I can prove "if there is a god they *could* be a piece of shit like me" with the unfalsifiable theory of simulationism: our universe can be a simulation, so all discussions about simulations apply.
 
As the paper points out, superdeterminism argues that there is a vast conspiracy of nature, begun at the big bang, to fake the result of every Bell test. It’s crazy, just like Last Thursdayism in religion.

I prefer to think of 3-particle entanglement, e.g. the GHZ experiment. No probability analysis required as in Bell's 2-particle test. The GHZ result is ALWAYS paradoxical. Briefly, after measuring three particles you can deduce with certainty that an unmade measurement would have yielded No. And deduce with certainty that that measurement would have yielded Yes. Thus the conclusion that the measurement "acted at a distance."
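For reference, the three-particle state behind the GHZ argument:

\[
|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr)
\]

Measuring suitable spin components on the three particles produces a flat contradiction with any assignment of pre-existing values; no probabilities or statistics are needed, which is the "always paradoxical" point above.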

But there is no paradox. On Monday you create two entangled particles; on Tuesday and Wednesday you measure them. Wednesday's measurement is seen to affect Tuesday's, or vice versa. But this is simple causality, if you allow retrocausality. Wednesday's measurement affects the original particle entanglement on Monday. This interpretation is well-known to physicists but it seems too spooky to pursue. Retrocausality is spooky, but that doesn't make it wrong.

...
Spooky action at a distance, indeterminism, antirealism in Copenhagen … many worlds in Everett‘s relative state formulation … pretty weird to me! :alien:
...
All those things are understood. Spooky action at a distance — i.e., nonlocality — is not like those things.

Everything is local. As long as you imagine some of the cause-effect arrows reversed.

Once you lose the "hang-up" about time flowing, and accept that everything -- "past" and "future" -- exists all at once, you don't need a Many-Worlds Interpretation or a Probabilistic Interpretation. There is one Universe and it just ... is.
 
Everything is entangled. But, except for careful experiments like GHZ or Bell's test (and usually done at very low temperatures), the entanglements become diluted and weak. They resemble noise rather than signal. A major challenge in building quantum computers is just maintaining a high signal-to-noise ratio.

There is an important insight that developed only recently. Macroscopic entanglements are exploited in nature! A good example is photosynthesis. The efficiency (avoidance of waste heat) of the reactions involving photons and excitons in chlorophyll is far higher than that of any man-made engine. Using quantum principles, excitons "tunnel" their way to the appropriate reaction center. With new insight about retrocausality the tunneling transaction can be viewed in the opposite direction.

On some level, chlorophyll solves a computation problem much as Shor's Algorithm or Grover's Algorithm solves problems on a quantum computer. And chlorophyll does this at room temperature using ordinary (though complicated) molecules.
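For a sense of what such a quantum computation does, here is a minimal classical simulation of one Grover iteration on a 4-element search space (a sketch using a plain state vector; the index `marked` is a made-up example, and for N = 4 a single iteration already finds it with certainty):

```python
import numpy as np

N = 4          # search space of size 4 (two qubits' worth of basis states)
marked = 2     # hypothetical index of the item we want Grover to find

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# One Grover iteration (optimal for N = 4).
# Oracle: flip the sign of the marked amplitude.
state[marked] *= -1
# Diffusion: reflect every amplitude about the mean amplitude.
state = 2 * state.mean() - state

print(state ** 2)  # probabilities: ~[0, 0, 1, 0], all weight on `marked`
```

The point of the analogy is that the physical system settles into the answer in one "step", whereas a classical search would have to check entries one by one.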

Once one accepts that nature has evolved to exploit macroscopic entanglements, interesting ideas come into view. But I'd better stop here before getting banished to the funny farm! :-)
 
Yes, I recently read about the photosynthesis stuff. Really fascinating!
 
That has not been my interpretation; rather, my interpretation has been that things being the way they are is still reliant on *prior events*. Its being a necessary truth can still be "the prior antecedent necessary truths", and it is not obvious that the outcome would come from the input: the appearance of the information only accesses the outcome because of the action of the system in the middle, unless I misread something.

You and I both support compatibilist free will, though it may be that we disagree on some of the necessary or sufficient conditions for it. One condition I see is the reality of genuine counterfactual possibilities, possible but non-actual worlds. Superdeterminism, if I understand Hossenfelder’s explication of it, excludes such worlds.
 
That has not been my interpretation; rather, my interpretation has been that things being the way they are is still reliant on *prior events*. Its being a necessary truth can still be "the prior antecedent necessary truths", and it is not obvious that the outcome would come from the input: the appearance of the information only accesses the outcome because of the action of the system in the middle, unless I misread something.

You and I both support compatibilist free will, though it may be that we disagree on some of the necessary or sufficient conditions for it. One condition I see is the reality of genuine counterfactual possibilities, possible but non-actual worlds. Superdeterminism, if I understand Hossenfelder’s explication of it, excludes such worlds.
"Real" "counter-factual" and "possibilities".

For me, all that is required for "counter-factual" is "decorrelated facts".

For instance, I can say "A does not imply B or !B", and just because there are things like that, I can say "the creation of will A DOES NOT IMPLY precondition B of will as true or false". It's not the randomness that allows this, nor probabilistics in the least, but the availability of that kind of interaction where "A does not imply B or !B".

The condition you see is unnecessary.

What is necessary for real choice is only that reaching the next moment of our universe requires the previous moment: sequential determinism.

We might have to revisit the topic, though, if someone proves all observed math is "symmetrical"; it would imply an ability to calculate exactly the 10^10^10th digit of pi without calculating the fourth one.

Personally I'm hoping and even expecting that P=NP is only "trivially" true: that it's proven that while you can represent P in NP terms, the NP representation gives you worst-case brute force on P; that knowing why doesn't actually give any leverage, in the same way that superdeterminism being true would imply a ridiculously bloated initial state, and in the same way Gödel's Incompleteness Theorem rules out systems containing exact implementations of themselves.

Because not every series converges to a single answer over time, not every system can be predicted forward through time except at the speed of one second per second.

The only time that changes is when the system is bound to some mathematical property that IS correlated in some platonically certain way. This in fact touches Swammerdami and their discussion of entanglement, because it may imply entanglement is much more "involved" than most might expect in the process of knowledge in general.
 