what you imagine or simulate does not necessarily relate to the physical world and its attributes and principles
No, it doesn't, which is exactly what logically allows the will to be unfree: you simulate a provisionally free will, and it is provisionally free, which is to say not free under all circumstances, not necessarily free, but provisionally so.
The provisions are simple: that the error in the simulation's outcome calculations is low enough, and that the state is as imagined.
When what you imagine is produced by a faithful simulation that follows the rules of reality and accounts for the current state, the will is actually free with respect to the goal instruction(s).
But what is certain is that no will can be actioned that was not developed, and wills get developed through this process of simulation and logic applied to its outcome.
If the menu contains no main battle tank, your will to order one is not free: your model of reality is bad.
If the menu contains steak but they are out, your will to order a steak is not free: your state definition was bad (as was the state declaration on the menu).
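To make the two provisions concrete, here is a minimal sketch in Python of the menu scenario; the names (will_is_free, kitchen_stock) are hypothetical illustrations, not anything from the original argument:

    # Hedged sketch, not a formalism: a will is "free" only if the model
    # of reality contains the option (simulation fidelity) and the actual
    # state supports it (state accuracy). All names are hypothetical.
    def will_is_free(desired, menu, kitchen_stock):
        if desired not in menu:
            return False  # bad model of reality: the option was never on offer
        if kitchen_stock.get(desired, 0) == 0:
            return False  # bad state definition: declared, but unavailable
        return True       # both provisions hold: the will can be actioned

    menu = {"steak", "salad"}
    kitchen_stock = {"steak": 0, "salad": 4}

    print(will_is_free("main battle tank", menu, kitchen_stock))  # False: bad model
    print(will_is_free("steak", menu, kitchen_stock))             # False: bad state
    print(will_is_free("salad", menu, kitchen_stock))             # True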
All an inaccurate simulation gives you is an inability to design wills that can even possibly be free. So rather than errors in simulation globally nullifying freedom, they merely render some wills unfree.
Indeed, only one will of a set of mutually exclusive wills may ever be free, insofar as part of the state projection is "if the state is mostly like this AND I decided after the deliberation process that I like this alternative".
Note the second clause of that projection. It can be inaccurate, and in fact it is assumed at the outset to be inaccurate for all but one of the alternatives. But the only way to get an understanding of what may happen is to assume it CAN happen IF you want it to happen, and then to decide, out of everything that CAN happen IF you want it, which one you actually want.
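A hedged sketch of that structure, with invented names: each projection carries its own "I decided I like this" assumption, and the deliberation itself makes that assumption false for all but one of them:

    import random  # stand-in for whatever process actually decides

    options = ["steak", "salad", "soup"]

    # Each projection embeds the assumption "I decided after deliberation
    # that I like this alternative".
    projections = {opt: {"state": "mostly like now", "assumed_chosen": opt}
                   for opt in options}

    chosen = random.choice(options)  # placeholder for the real deliberation

    # The embedded assumption turns out accurate for exactly one projection:
    for opt, proj in projections.items():
        print(opt, "assumption accurate:", proj["assumed_chosen"] == chosen)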
Physical principles allow this kind of projection quite easily, since the principles of the universe don't seem to change from day to day.
Wanting to do X is fully determined by these prior causes.
One of the things that sums into those prior causes is that they opened the menu, projected a simulation of each option they looked at, and calculated which of those options they wanted to do.
Without generating Z, and then Y, and then X, and choosing between them by some process, there is no "wanting to do X". It is fully determined by prior causes, but something determined by prior causes can still be the prior cause that determines something else.
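As a sketch of that generate-then-choose process (the scoring function and option values are invented for illustration): the want is fully determined by its inputs, and its output then determines the next step:

    def simulate(option, state):
        # Hypothetical scoring of a projected outcome; deterministic given inputs.
        return state["hunger"] * option["satiety"] - option["cost"]

    def deliberate(options, state):
        # Generate Z, then Y, then X; simulate each; choose by some process
        # (here: highest projected score). The result is "wanting to do X".
        return max(options, key=lambda o: simulate(o, state))

    def act(want):
        return f"order {want['name']}"  # the want now determines an action

    state = {"hunger": 3}
    options = [{"name": "Z", "satiety": 1, "cost": 2},
               {"name": "Y", "satiety": 2, "cost": 3},
               {"name": "X", "satiety": 3, "cost": 4}]

    want = deliberate(options, state)  # determined by prior causes...
    print(act(want))                   # ...and itself a prior cause of the action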
You don't need to choose your proclivities for your proclivities to be the basis of your own choices.
When YOUR proclivities result in YOU making choices, and we don't like the choices you make, then we and perhaps also you may react to adjust proclivities across the system. And so there is regulatory control over proclivities.
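One way to picture that regulatory loop, assuming purely illustrative numeric proclivities: choices flow from the weights, and reactions to disliked choices adjust the weights, which shifts future choices:

    proclivities = {"steak": 0.9, "salad": 0.1}  # assumed initial weights

    def choose(proclivities):
        # Deterministically pick the highest-weighted option.
        return max(proclivities, key=proclivities.get)

    def react(choice, proclivities, disliked):
        # External (or self-) regulation: dampen the proclivity behind
        # a disliked choice so future choices shift.
        if choice in disliked:
            proclivities[choice] *= 0.3

    for _ in range(3):
        c = choose(proclivities)
        react(c, proclivities, disliked={"steak"})
        print(c, proclivities)  # the third round chooses salad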