Marvin Edwards
@Marvin, this is to discuss a function of math, with hypothetical but describable aspects, so as to not need to do the work of describing it.
If we wish to communicate successfully, we may need to do the work of describing it. If we have gained some proficiency in describing things, that will require less work than encoding the description in uncommon language and then requiring others to decode it. Ideally, we want to find the simplest, clearest, and most effective way to describe things.
The agent here is not necessarily a person but exists in the system it is a part of specifically to prove out a point.
Then we're discussing an analogy. Deriving any conclusions from an analogy, that can be applied to the actual thing, will depend upon the accuracy of the analog.
Here we assume that the system forks itself along each "possible" future, and for the sake of brevity of calculation, prevent recursive forkings.
I think what you mean is to prevent repetitive forks. We want to avoid going down the same road twice. In computer programming, the only example of recursion I can recall is traversing a series of files starting at a given root file folder (which can be C:\ if we want to get to them all). The logic would be to open the folder and traverse its contents, which can include other folders. When we get to the first sub-folder, the routine calls itself to handle that sub-folder. When a routine calls itself, it is recursive. Each call saves its current state on the stack before entering the routine again, so when the sub-call is finished, it returns to the prior state of the calling routine, which simply picks up where it left off with the next item in its folder's contents. Every folder and file is traversed just once, all with the same small routine, and the logic works on any hierarchical file system.
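A minimal sketch of that pattern in Python (the traverse function and the starting path are just illustrations, not anything from this discussion):

import os

def traverse(folder):
    # Handle each item in this folder's contents.
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isdir(path):
            # A sub-folder: the routine calls itself to handle it.
            # The loop's current state is saved on the call stack and
            # resumes when the sub-call returns.
            traverse(path)
        else:
            print(path)  # a plain file: visited just once

traverse("C:\\")  # start at the root to get to them all; any folder works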
So assuming I am personally the agent with future sight in some LIFE simulator, I would say "I am hungry and shall die unless I 'eat food' in twenty frames, and I must decide in 10 frames which 'food' I will attempt to access, as all access attempts take 10 frames."
Then the system goes into a resolution state wherein at T+10, all values of 'the food' are substituted into "decision at T+10", the system produces the results of all variances through T+20, and then one of these with the best statistics is selected at T+10.
Cool. I can understand that as an analog to a computer program, where T+10 can be instruction 10 or a call to a complex set of instructions at location 10, and T+20 can be another routine in the program.
This represents perfect forward knowledge on a finite range.
Okay, so now I'm visualizing T+10 as performing a subroutine of instructions from T+11 through T+20, with a return at the end of T+20 to where we left off in the T+10 routine.
In T+11 through T+20 we have a finite set of options, and we calculate the estimated benefit of choosing each one, so that when we return to the T+10 routine, the next instruction loops through these scores and returns the option with the highest score (the best outcome).
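A minimal sketch of that score-and-select loop in Python (the option names and benefit values are made up purely for illustration):

def simulate(option):
    # Stand-in for running frames T+11 through T+20 for one option.
    # The estimated benefits here are hypothetical values.
    estimated_benefit = {"food_A": 0.9, "food_B": 0.4, "food_C": 0.7}
    return estimated_benefit[option]

options = ["food_A", "food_B", "food_C"]   # the finite set of options
scores = {opt: simulate(opt) for opt in options}

# Back in the T+10 routine: loop through the scores and return
# the option with the highest score (the best outcome).
best = max(scores, key=scores.get)
print(best)  # -> food_A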
It says "perfect finite forward knowledge" is a thing that determinism does not rule out, nor decision upon such perfect knowledge.
Well, in the computer program, the forward knowledge will be "perfectly" calculated, but could still be entirely false due to bad logic (coding error) or incomplete or missing information (garbage in, garbage out).
Whether the program is full of bugs and the information is garbage, or not, will be causally necessary from any prior point in time. However, the fact that it is causally necessary from any prior point in time will not help us to find the bugs or correct the information!
Universal causal necessity/inevitability is a logical fact, but neither a meaningful nor a relevant fact. It is pretty useless. The intelligent mind simply acknowledges it, and then forgets all about it.
It's no less a choice.
Correct. Except that in the computer analogy there is no interest in the outcome. Computers don't care what we program them to do. Only we care about that. We may imagine that the program cares about its choices, but only the end-user and the programmer care about what the computer chooses to do.
So if we can recognize that perfect finite forward knowledge is not forbidden by "determinism", we can recognize that limited finite forward knowledge is also sensible as a concept and not the nonsense some claim it is; it just means that it's imperfect, and that our wills remain "provisional" even after they are selected.
Well, yeah. Forward knowledge (or perhaps it is better to avoid "knowledge" and use a term like "speculation", "estimate", or "prediction") will always be finite, and sometimes it will be perfect, but most of the time it will be imperfect.
And it sounds correct to say that even after we have made the choice, things are still provisional or "iffy" until we carry out that choice, see what actually happens, and compare that to what we thought would happen.
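In program form, that "provisional" quality might look like this sketch (all names and values are hypothetical, just to show the predict-act-compare step):

def predicted_outcome(choice):
    # Our finite forward estimate of what the choice will yield.
    return 0.9   # made-up value

def actual_outcome(choice):
    # What actually happens once the choice is carried out.
    return 0.6   # made-up value

choice = "food_A"
expected = predicted_outcome(choice)
observed = actual_outcome(choice)

# The choice remains provisional until we can compare the two.
if observed != expected:
    print("the prediction was imperfect; revise the estimate next time")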
The only thing that "future sight" changes is that the values produced cannot be wrong about the freeness of the will.
But "freeness of the will" is still not clarified unless we state what it is that the will is supposed to be "free of". If there is no meaningful or relevant constraint, then freedom is the assumed state of things. Coercion is a meaningful and relevant constraint upon our freedom to choose for ourselves what we will do. So, in the absence of coercion (and other forms of undue influence), we are free to choose for ourselves what we will do.
In the computer analogy, we have no coercion. The program will do whatever we tell it to do, without experiencing any fear that someone will shoot it if it makes the wrong choice.