DBT
Contributor
You have provided red herrings about neurology and cognition, which have already been discussed, addressed, and flushed out using hard determinism: neurons are neither necessary for nor applicable to the concept in the first place, given the definition of a will here as "a series of instructions into a requirement". However, more than sufficient material has already been provided on neuroscience, cognition, agency, expert analysis, arguments from incompatibilism, etc.
Your arguments from incompatibilism are not-even-wrong because they are circular and do not address the definitions of compatibilists. "No Libertarian free will" does not get you to "therefore no compatibilist free will".
The only way you get to "therefore no compatibilist free will" is to actually pick up compatibilist definitions, unpack the whole definition, and show a compatibilist construction that yields a contradiction that states a will as both free AND unfree in the same way.
Ah, now here's an interesting point: I do claim computers can have wills. I do claim computers' wills can be free or unfree. It is in fact a corollary of the definitions I use.
"by doing so it can be claimed that computers have free will and other absurdities"
You consider it an "absurdity" to claim that humans have "free will*"! Why would I trust someone who believes such nonsense so religiously on ANYTHING they deny "free will*" of?
It seems you have some religious need for computers to lack these capabilities. What are your grounds for considering it an "absurdity", beyond it merely being something you find distasteful for whatever reason?
We are mechanical systems in a deterministic environment. Computers are mechanical systems in a deterministic environment.
Our universe has a lot of absurd things in it. You are an absurdity. As am I. Nature does not abhor absurdities but is in fact chock full of them, it being an absurdity itself.
Rather than asking whether it's an absurdity, perhaps you would be better served by asking "is this absurdity how the absurdity of nature happens to function?"
And the answer to that question is going to be "yes, computers have wills; those wills may be free."
See, this is an interesting part which in fact proves my contention that the brain, particularly the part of the brain that I implicate as "the seat of consciousness", is in fact capable of the production, execution, and cessation of "wills". I feel bad for Vlad dying like that, to be continued by the less interesting parts of his brain.
"Vladimir, for example, is a patient whose frontal lobes were surgically resectioned after a train accident. As a result, he is unable to form a plan, displays an extreme lack of drive and mental rigidity, and is unaware of his disorder."
He got shoved into "the Chinese room" or "reactive automatic action" or "mere reaction".
It's like what happens in an office when the boss just up and quits and the employees are all too dumb to take over for them. It pretty well indicates that before the frontal lobe went offline, there was a boss.
It just happens that the unreliable interpreter function, essentially the guy that relays messages between the boss and the employees, is still alive, and so nobody is any the wiser that the boss is gone. Nobody ever really saw the boss directly.
But it's interesting that you use as an example something which proves that the brain normally possesses the regulatory ability to form a plan, execute the plan, etc.
At any rate, Vlad still does have "wills", but they are simpler linkages that, lacking the ability to form complex wills, are much less likely to be free.
So Vlad still has a will, the will is still either free or not, he just now lacks the capacity to exert the regulatory control over them to displace the reactive wills with more measured and considered ones.
In the same way, the line-following robot doesn't need the ability to create wills for itself at all in order to hold a will, or for that will to be free. When you trace the causality of that will being held, you come to me and my will, held by myself freely and created by my prefrontal cortex (me, btw), and you can say "the robot has the will to follow the line. That will was 'free' as of the last time I looked at it. It held the will to follow the line because I held the free will to put that will there, and I did so 'because I wanted to'."
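The robot example can be sketched in code. This is purely my own toy illustration, not anyone's formal model: a "will" as a requirement plus a series of instructions, carrying a record of who freely put it there, so the causal chain traces back to the creator rather than to the robot.

```python
# Toy sketch (an illustrative analogy, not a formal theory of mind):
# a "will" as "a series of instructions into a requirement", held by a
# robot that did not create it, with a provenance field pointing back
# to whoever freely placed it there.

from dataclasses import dataclass, field

@dataclass
class Will:
    requirement: str    # what the will aims at
    instructions: list  # the series of instructions serving it
    placed_by: str      # causal origin of this will
    free: bool = True   # binary freedom value, as of the last check

@dataclass
class LineFollower:
    wills: list = field(default_factory=list)

    def hold(self, will: Will) -> None:
        # The robot need not create wills itself in order to hold one.
        self.wills.append(will)

robot = LineFollower()
robot.hold(Will(
    requirement="follow the line",
    instructions=["read sensors", "steer toward line", "repeat"],
    placed_by="my prefrontal cortex (me)",
))

print(robot.wills[0].requirement)  # follow the line
print(robot.wills[0].placed_by)    # my prefrontal cortex (me)
```

The point the sketch makes is only the structural one from the paragraph above: the will is held by the robot, but its provenance, and the freedom behind its placement, belong to the creator.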
Now, "because I wanted to" is a pretty big statement. It encodes a lot of stuff and is doing a lot of work here. But you are not yet to the point where we can discuss what that means, implies, or any of that. It is not germane yet to the discussion as to where wills come from and why and what impacts that has on 'responsibilities'.
In the end, the only real impact that where a drive comes from has is on whether we let you go "free", whether we put you in a corrective environment, or whether we put you in a hole.
*Really, "wills which have binary freedom value"
So, you haven't understood a word that's been said.
This issue comes down to agency: the mental state of 'Vlad' is not subject to his will but is a reflection of the physical state and condition of his brain, to which Vlad, the conscious mind, the man himself, has no access and over which he has no control, the conscious mind having no access to its own means of production.
The physical state of the brain, chemical imbalances, structural damage, lesions, electrochemical activity, etc., determines the state of him: his experience of the world, his thoughts, feelings, deliberations, and actions, all fixed in every incremental instant in time by the physical state of the brain. Not will, not free will. Function does not equate to will. Will has no agency within the information-processing activity of a brain. Brain condition, not will, equates to output.
''The increments of a normal brain state are not as obvious as direct coercion, a microchip, or a tumor, but the “obviousness” is irrelevant here. Brain states incrementally get to the state they are in one moment at a time. In each moment of that process the brain is in one state, and the specific environment and biological conditions lead to the very next state. Depending on that state, this will cause you to behave in a specific way within an environment (decide in a specific way), in which all of those things that are outside of a person constantly bombard your senses, changing your very brain state. You have no real control over the internal dialogue in your mind.''
''Having made my choice or decision and acted upon it, could I have chosen otherwise or not? [. . . ] Here the [compatibilist], hoping to surrender nothing and yet to avoid the problem implied in the question, bids us not to ask it; the question itself, he announces, is without meaning. For to say that I could have done otherwise, he says, means only that I would have done otherwise, if those inner states that determined my action had been different; if, that is, I had decided or chosen differently.
To ask, accordingly, whether I could have chosen or decided differently is only to ask whether, had I decided to decide differently, or chosen to choose differently, or willed to will differently, I would have decided or chosen or willed differently. And this, of course, is unintelligible nonsense [. . . ] But it is not nonsense to ask whether the causes of my actions, my own inner choices, decisions, and desires, are themselves caused.
And of course, they are, if determinism is true, for on that thesis everything is caused and determined. And if they are, then we cannot avoid concluding that, given the causal conditions of those inner states, I could not have decided, willed, chosen, or desired other than I, in fact, did, for this is a logical consequence of the very definition of determinism. Of course, we can still say that, if the causes of those inner states, whatever they were, had been different, then their effects, those inner states themselves, would have been different, and that in this hypothetical sense I could have decided, chosen, willed, or desired differently but that only pushes our problem back still another step [Italics added].
For we will then want to know whether the causes of those inner states were within my control, and so on ad infinitum. We are, at each step, permitted to say could have been otherwise only in a provisional sense, provided, that is, that something else had been different, but must then retract it and replace it with could not have been otherwise as soon as we discover, as we must at each step, that whatever would have to have been different could not have been different (Taylor, 1992: 45-46).''