You don't have to be a crude Christian-baiter with me, since I'm not a believer. Nor am I interested in anything other than communication/information processing. Information theory was one of my teaching areas.
I care not a whit about your pseudo-analytic interpretations of what I write, since it all fits on the same pinhead.
Note the difference between your tome and my short story.
Yes, note the difference: you often argue what "can't" be the case from stunning positions of ignorance, never once addressing what CAN be accomplished by neurons, only insisting religiously on what can't.
It is clear that you do not know how things go from "transistors" to "it can program/build itself". That's fine, but then you shouldn't make such arguments from ignorance as "it can't program itself".
I can't reach you, and I can't teach you, because you don't want to learn.
You are absolutely a believer. You are a believer that you are a slave to fate.
There's nothing that can be done to offer an understanding of why we have free will to someone who WANTS to not have control over their own life, not even the small shreds of control that a life may offer.
Even so, it's possible to lack control over a great many things, even among the things you do have some control over.
All of this is an exercise in representation theory: multiple systems, in drastically different places, may share the same mechanics for some purpose, so that operating on the states of one solves for the states of the other without ever directly observing the other in operation.
Fit this into your interpretation of representation theory of mind.
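As a minimal sketch of that point (my own toy example, with hypothetical names): two systems with entirely different internals but identical mechanics, where stepping one predicts the other exactly.

```python
# Two "systems" with different substrates but the same mechanics:
# a 2-bit counter, realized once as arithmetic and once as a lookup table.

def counter_arithmetic(state: int) -> int:
    """A 2-bit counter realized as modular arithmetic."""
    return (state + 1) % 4

COUNTER_TABLE = {0: 1, 1: 2, 2: 3, 3: 0}

def counter_lookup(state: int) -> int:
    """The same 2-bit counter realized as a lookup table."""
    return COUNTER_TABLE[state]

# Operate only on the arithmetic system; its trajectory IS the trajectory
# of the lookup system, with no need to observe the latter running.
state_a = state_b = 0
for _ in range(6):
    state_a = counter_arithmetic(state_a)
    state_b = counter_lookup(state_b)
    assert state_a == state_b  # same mechanics, different substrate
```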
This paper rejected the analogy between neurocognitive activity and a computer. It was shown that the analogy results from assuming that the properties of the models used in computational cognitive neuroscience (e.g., information, representation, etc.) must also exist in the system being modelled (e.g., the brain). In section Scientific Models in Neuroimaging, we have seen how computational models offer a link between the collected data (e.g., behavioural or neuroimaging) and a possible explanation of how it was generated (i.e., a theory). While the usefulness of computational models is unquestionable, it does not follow that neurocognitive activity should literally possess the properties used in the model (e.g., information, representation). The last section offered an alternative account of neurocognitive activity bringing together the EECS and the formalisms of DCST. While both these accounts individually reject the mind as a computer metaphor, they rarely explicitly work together. The last section has shown how the cooperation between EECS’s characterisation of cognition and DCST’s formalisms offers a sustained and cohesive programme for explaining neurocognitive activity, not as a machine but as a biologically situated organism. Mainly this link was made by focusing on DCM of neurocognitive activity.
From:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8789682/
(please note that I actually referenced the material leading to the conclusion I quoted, so you have something to work with)
Hand waving and masturbatory head-scratching.
Nothing in that abstract offered any meaning beyond "we, the researchers, fail to see how the representation-theoretical neurocognitive model is actually constructed, so we reject even the idea that there is information in there being processed by a mechanism".
It's nothing but a yawn and watching sophists pretend at sophistry.
Meanwhile, we actual engineers are over here ignoring such self-fellating bullshit and just using the model of the neurons to make functional hardware that accomplishes representation, the retention of information, and the execution of function, on the basis of "as 'above', so 'below'".
As has been demonstrated, neurons can realize any arrangement of switching structures available in a computer.
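That demonstration goes back to McCulloch and Pitts (1943). A minimal sketch of the idea, with illustrative weights and thresholds of my own choosing: a single threshold "neuron" computes NAND, and since NAND is a universal gate, networks of such neurons can build every other switching structure.

```python
# A threshold "neuron" in the McCulloch-Pitts style.

def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nand(a, b):
    # Two inhibitory weights; the unit fires unless both inputs are active.
    return neuron([a, b], weights=[-1, -1], threshold=-1)

def xor(a, b):
    # XOR built purely from NAND neurons, as in any logic textbook.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "XOR:", xor(a, b))
```

From NAND you get NOT, AND, OR, latches, and hence registers and whole processors; the substrate changes, the switching algebra doesn't.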
Saying that neurological systems are somehow less capable of holding wills, or that their wills are somehow LESS full-featured than those of the tiny little ant-brained things on my hard drive, is cute, but wrong.
They hold objects. Those objects, as artifacts, fulfill the definition of "wills". Some of them also fulfill the requirement of "freedom", insofar as they result in a particular goal being reified: a second artifact, rendered back alongside the first, which matches it. Others are constrained, insofar as what they require shall not be reified.
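In code, that claim looks something like this (a hypothetical sketch; `Will`, `execute`, and the toy goal are my own inventions, not a standard API): a stored object carries a goal and a mechanism for pursuing it; it is "free" when running it reifies a matching artifact, and "constrained" when that is blocked.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Will:
    goal: int                    # the state this will aims to reify
    act: Callable[[int], int]    # the mechanism that pursues the goal

def execute(will: Will, world: int, blocked: bool = False) -> bool:
    """Run the will against the world; return True iff it was 'free'."""
    if blocked:
        return False             # constrained: the goal is not reified
    artifact = will.act(world)   # the second artifact, rendered back
    return artifact == will.goal # freedom: the result matches the goal

increment_to_ten = Will(goal=10, act=lambda w: min(w + 1, 10))
print(execute(increment_to_ten, world=9))                 # True: free
print(execute(increment_to_ten, world=9, blocked=True))   # False: constrained
```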
Our brains, as they are, are capable of all this and more.