Likewise, any probabilistic system can be modeled as a deterministic one in any finite regard, even at scales as vast as our universe, assuming a machine big enough to model it and enough time to do so, or features that transform in the correct ways.
That’s another way of saying what I was trying to express. I don’t understand the math that would prove its applicability to all “possible” universes (aka probabilistic systems, if all possible universes ARE probabilistic).
I assume that the size of the modeling “machine” would have to exceed the size of the universe it attempts to model, as it has to contain models of everything including attempts to model it. (How does one “model” a quark without using something bigger than a quark?)
But that would just be an incidental feature. Unless our ACTUAL universe is just a model of an even-more-actual one.
The machine would have to operate by rules we can express in finite form, rules which extend to infinite extent, but for which we ourselves cannot provide the "infinity" to extend them into, or to process them upon. Some mechanism available to us may suffice to get us close-ish?
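As a toy illustration of that finite-regard point: any finite run of a "probabilistic" process can be reproduced exactly by a deterministic machine with finitely expressed rules, e.g. a seeded pseudorandom generator. A minimal sketch (the generator, seed, and run length here are arbitrary illustrative choices):

```python
import random

def sample_universe(seed: int, steps: int) -> list[int]:
    """Deterministically reproduce a finite run of a 'random' process.

    The seed plays the role of the compact, finitely expressed
    precondition; the generator's update rule is the finite rule set.
    """
    rng = random.Random(seed)  # finite rule + finite precondition
    return [rng.randint(0, 1) for _ in range(steps)]  # 'coin flips'

# Two runs with the same seed are identical, so in any finite regard
# this 'probabilistic' system behaves as a deterministic one.
assert sample_universe(seed=42, steps=1000) == sample_universe(seed=42, steps=1000)
```

Everything "random" about the run lives in the one finite seed; the transform rules never change.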
I have plenty of universes I can hold up that are seen from our perspective only as models, but which exist logically/mathematically/metaphysically.
What I find more useful, though, is modeling the motion of massive numbers of quarks using something smaller than that massive number of quarks, because the relevant behavior of those quarks is itself "simulated", or at least far more complicated than the model, despite their shared rules of behavior.
And it is this symmetry with the system composed of fewer particles that fascinates us so, because then anything that shares that same property, even a far greater number of very different particles, will have something that WE may know of it: a metaphysical truth about the thing.
In fact, I think the relationships revealed relatively recently within number theory between modular forms and elliptic curves are part of this broader relationship between probabilistic and deterministic views of systems, or at least that there is a strikingly similar bridge between the two?
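To pin down which bridge I mean: the modularity theorem ties every elliptic curve over the rationals to a modular form whose coefficients encode the curve's point counts mod each prime. In rough notation (my gloss, not a full statement of the theorem):

```latex
% Modularity theorem (Taniyama–Shimura–Weil, proved by Wiles et al.):
% an elliptic curve E over \mathbb{Q} of conductor N corresponds to a
% weight-2 newform f(q) = \sum_{n \ge 1} a_n q^n of level N such that
\[
  a_p = p + 1 - \#E(\mathbb{F}_p) \qquad \text{for every prime } p \nmid N,
\]
% i.e. one tidy analytic object deterministically encodes point counts
% that, prime by prime, look statistically scattered.
```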
Ultimately, this is what I chalk the non-disprovability of the contention that the universe is probabilistic or deterministic up to in the first place... However, if this is really the logical root of the non-disprovability, then the point still stands: free will as we understand and experience it would somehow be incompatible with one view or the other.
What the compatibilist brings to the core of the matter, though, is how complicated the relative representations are.
If I were to compare this to anything, I would compare it to the aperiodic structures that can be formed of chevrons. Chevrons can be arranged in a number of aperiodic ways, but are not obligated to be aperiodic. Chevrons seem "more probabilistic" than the Spectre, which has fewer valid placement regimes, all of which obligate infinite aperiodicity.
Despite the fact that the cardinalities are likely the same, there is still a larger number of ways to position a smaller number of tiles for the chevron, and some seem much more "arbitrary". Part of my brain wants to think about the way low primes are relatively less "arbitrary" than high ones, with much more chaotic interaction in a constricted prime sieve, even though all low primes have the same measure of multiples (countably infinite) as higher ones do.
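That sieve intuition can be made concrete: in any finite window, the multiples of a prime p occupy roughly a 1/p share of the integers, so low primes do far more of the striking-out even though every prime's set of multiples is countably infinite. A quick sketch (the window size is an arbitrary choice):

```python
def multiple_share(p: int, window: int = 10_000) -> float:
    """Fraction of the integers 1..window that are multiples of p.

    Every prime has countably infinitely many multiples, but in any
    finite (constricted) window, low primes claim a far larger share.
    """
    return (window // p) / window

for p in (2, 3, 5, 101, 9973):
    print(p, multiple_share(p))  # ~1/p: 0.5, 0.3333, 0.2, 0.0099, 0.0001
```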
Math would suggest that there is a compact way to express both the rules and the initial condition, with arbitrary-seeming selections made in few places, or even only one, for each of the precondition and the transform rules on the field it forms. How "compact" this ends up being, and how "accessible" the arbitrary-seeming selection is, seems to be the general extent of how we end up measuring probabilistics at all.
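An elementary cellular automaton such as Rule 30 is a clean instance of this: the entire transform rule fits in one byte, the precondition is a single arbitrary-seeming selection (one live cell), and the output is famously random-looking. A sketch, assuming a wrapped (periodic) row for simplicity:

```python
def rule30_row(cells: list[int]) -> list[int]:
    """One step of Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Compact rule + one arbitrary-seeming selection (a single live cell):
width, steps = 31, 15
row = [0] * width
row[width // 2] = 1  # the lone 'precondition'

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_row(row)
```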
Because we cannot model that precondition at all, because it appears entirely and absolutely undecidable (and, if the universe is a growing bubble of interaction amid infinite orthogonal bubbles of growing interaction on an aperiodic field, IS literally undecidable), we model that part almost exclusively with probabilistics, and we use fairly probabilistic language to discuss it.
When it pertains instead to the stuff that seems to change in very predictable ways, where there are statistically measurable effects going on, we model it with the mostly deterministic version.
Even philosophy seems to wrap the probabilistic-seeming stuff in determinism, or at least in the attempt to imagine, in theory, what systems could be, assuming you knew a compact way to express a precondition from a reasonably accessible reference position, along with what that precondition's nature implies (such as the nature of a field as a Spectre field), even if the specific fields we look at, or the places we see them in, do not comport to any place we can see.
The end result is the suggestion of a razor, in fact Occam's razor: select the least probabilistic theory of all those presented, the theory that assumes the smallest number of not-math numbers (free parameters the math itself does not supply).
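There is a formal reading of that razor in algorithmic information theory (my gloss of the minimum-description-length idea, not a claim about what Occam wrote): prefer the hypothesis with the shortest total description, which automatically penalizes every hand-picked constant.

```latex
% Minimum-description-length form of the razor: pick the hypothesis H
% minimizing the length of the rules plus the data encoded under them,
\[
  \hat{H} = \arg\min_{H} \bigl[ K(H) + K(D \mid H) \bigr],
\]
% where K is Kolmogorov complexity; each arbitrary "not-math number"
% in H lengthens K(H), so the least probabilistic theory wins.
```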