
Compatibilism: What's that About?

DBT,

I’ve asked this before, but I don’t believe you addressed this.

Where did brains come from? What are they good for?

Where did life come from? What is it good for? The brain/central nervous system generates a mental map or representation of the environment and self, which enables an organism to navigate its environment, find food, shelter, mate, etc.

Basically:
Principle 1. The brain is a physical system. It functions as a computer. Its circuits are designed to generate behavior that is appropriate to your environmental circumstances.

The brain is a physical system whose operation is governed solely by the laws of chemistry and physics. What does this mean? It means that all of your thoughts and hopes and dreams and feelings are produced by chemical reactions going on in your head (a sobering thought). The brain's function is to process information. In other words, it is a computer that is made of organic (carbon-based) compounds rather than silicon chips. The brain is comprised of cells: primarily neurons and their supporting structures. Neurons are cells that are specialized for the transmission of information. Electrochemical reactions cause neurons to fire.

Neurons are connected to one another in a highly organized way. One can think of these connections as circuits -- just like a computer has circuits. These circuits determine how the brain processes information, just as the circuits in your computer determine how it processes information. Neural circuits in your brain are connected to sets of neurons that run throughout your body. Some of these neurons are connected to sensory receptors, such as the retina of your eye. Others are connected to your muscles. Sensory receptors are cells that are specialized for gathering information from the outer world and from other parts of the body. (You can feel your stomach churn because there are sensory receptors on it, but you cannot feel your spleen, which lacks them.) Sensory receptors are connected to neurons that transmit this information to your brain. Other neurons send information from your brain to motor neurons. Motor neurons are connected to your muscles; they cause your muscles to move. This movement is what we call behavior.

''In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.''
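The stimulus-to-behavior pipeline described in the passage above can be caricatured in a few lines of code. This is a deliberately simplified sketch of the information-flow analogy only; the function names and the "world" dictionary are invented for illustration, and real neural circuits are vastly more complex.

```python
# Toy caricature of the sensory -> circuit -> motor pipeline.
# Illustrative only: it shows the flow of information the passage
# describes, not anything about actual neurons.

def sensory_receptor(world):
    """Gather a crude signal from the environment."""
    return {"food_visible": world.get("food", False)}

def neural_circuit(signal):
    """Map the signal to a motor command (the 'processing' step)."""
    return "approach" if signal["food_visible"] else "keep_searching"

def motor_neurons(command):
    """Movement driven by the command is what we call behavior."""
    return f"behavior: {command}"

world = {"food": True}
print(motor_neurons(neural_circuit(sensory_receptor(world))))
# -> behavior: approach
```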


You believe that our impression that we can choose among competing alternatives is an illusion, and that a person really has no more choice than a rock rolling down a hill. Is that not right?

I'm saying that determinism doesn't allow alternate actions in any given instance in time. Compatibilists acknowledge this, which is why the compatibilist definition is ''to act in accordance (unimpeded) with one's will'' and not the ability to have done otherwise.

If so, you owe an explanation of where the alleged illusion of free choice comes from.

If your hard determinism is correct, then, as I have argued, brains, minds, and the illusory impression of free choice would be useless. They would have no survival value, and hence there would have been no selective pressure to evolve such organs and abilities.


It's not ''my hard determinism'' - I see it as a matter of compatibility, or not.


But we have such organs. Why? Not all of evolution is driven by natural selection. Some results are pure accident (genetic drift) and some phenotypic outcomes are accidental consequences (spandrels) of other selection-driven outcomes. Clearly neither is the case for complex brains. Brains, minds, and the ability to make choices were cumulatively selected for by untold generations of descent with modification.

But why? In your world, brains do nothing for us. But in reality, complex brains are incredibly energy-intensive. They are expensive to build and maintain. Our big brains are what make human birth perilous for the mother, and why human children must be cared for by their parents far longer than the young of most other species.

You misrepresent the argument, which is simply a question of compatibility: whether free will is compatible with determinism.

The compatibilist claims it is, giving their definition of compatibility as ''acting in accordance with one's will/unimpeded actions'' - while the incompatibilist points out why this definition is flawed and therefore inadequate to prove the proposition.

Which is what I have been doing in countless posts, explanations, quotes, references, links to studies, neuroscience, the nature of cognition, volition, movement, action, etc, etc.....

For instance:
Movement Intention After Parietal Cortex Stimulation in Humans:
''Parietal and premotor cortex regions are serious contenders for bringing motor intentions and motor responses into awareness. We used electrical stimulation in seven patients undergoing awake brain surgery. Stimulating the right inferior parietal regions triggered a strong intention and desire to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region provoked the intention to move the lips and to talk. When stimulation intensity was increased in parietal areas, participants believed they had really performed these movements, although no electromyographic activity was detected. Stimulation of the premotor region triggered overt mouth and contralateral limb movements. Yet, patients firmly denied that they had moved. Conscious intention and motor awareness thus arise from increased parietal activity before movement execution.''
 
...It only bothers me because I have worked professionally in the field of artificial intelligence, and this kind of oversimplification is wrong on so many levels.

If you work in the field of artificial intelligence, you should know that free will is not a factor. That processing information and selecting an option according to sets of criteria has nothing to do with free will.

You really should be more cautious about making blanket statements about a field in which you have no expertise. The usefulness of free will in robotics has long been an open question, and it is a popular topic in AI. Here is a well-known 1999 paper by AI pioneer John McCarthy: FREE WILL-EVEN FOR ROBOTS

I can't access the page. Not that it matters. Unless there has been some miraculous breakthrough, AI has yet to achieve consciousness, let alone 'free will' - something that has been debated for centuries, with two sides to the argument: compatibilism and incompatibilism.

If the issue hasn't been resolved in humans....good luck with computers that possess neither consciousness nor will, only mechanical function.

Are you using the argument from authority? John McCarthy says this, therefore it is so?



You could, but that begs the question of what we mean by "will". The dispute is over how to define "will" and "free will". That's what you need to focus on. That's why I keep telling you that your argument is irrelevant--because you seem to think that the physical substrate that mental processes depend on defines free will. It does not, and you do not mount any convincing argument that it does. That's what makes your entire argument a fallacy of irrelevance--specifically, a genetic fallacy.


The dispute is about how to define will and whether 'free will' is compatible with determinism....and that is precisely what I have been focusing on all along, in case it has somehow slipped your mind.

As a reminder, incompatibilism argues that the compatibilist definition of free will is flawed for the given reasons.

For example.
''Wanting to do X is fully determined by these prior causes. Now that the desire to do X is being felt, there are no other constraints that keep the person from doing what he wants, namely X. At this point, we should ascribe free will to all animals capable of experiencing desires (e.g., to eat, sleep, or mate). Yet, we don’t; and we tend not to judge non-human animals in moral terms.'' - cold comfort in compatibilism.


''An action’s production by a deterministic process, even when the agent satisfies the conditions on moral responsibility specified by compatibilists, presents no less of a challenge to basic-desert responsibility than does deterministic manipulation by other agents.''


freedom
1: the quality or state of being free: such as
a: the absence of necessity, coercion, or constraint in choice or action - Merriam-Webster
 
One of those things is not like the others. The ball, the planets, and the moons, do not experience constraint. The dog experiences his chain as a constraint. For the dog, freedom is a meaningful concept, because the chain prevents him from chasing the squirrel, something that he really wants to do.

The common element is unimpeded action. Unimpeded action necessarily follows from necessitated will.

Again, you bury the meaningful distinction with a generalization. The notion of freedom requires the notion of constraint. The meaning of a specific freedom derives from the specific constraint.

I'm not trying to bury it. The argument is that the compatibilist definition of free will is not sufficient to prove the proposition.

For example:
1. We set the bird free (constraint: its cage).
2. We enjoy freedom of speech (constraint: censorship).
3. A woman was offering free samples in the grocery store (constraint: cost).
4. I participated in Libet's experiment of my own free will (constraint: undue influence).

Note that each of these freedoms has a meaningful constraint, specifically related to that type of freedom.


Distinctions do matter. Of course they do.

Setting the bird free of its cage doesn't establish the bird's freedom of will. Freedom of speech, etc, doesn't establish freedom of will for the speaker. The ball bounces freely down the hillside. The bird dives and swoops freely through the air.... these are all actions that follow action production.

It is the nature of action production that is specific to the issue of freedom of will, because it is specifically the means of action production that determines what action is taken in a given instance in time.

The use of free in relation to action says nothing about the means, state or status of the activator of actions.

Therefore, to define free will as "freedom from causal necessity" is nonsense.

I don't define free will as freedom from causal necessity. I argue that the term free will is redundant. The term 'free will' tells us nothing about human behaviour, means or drivers. That we have will, but it's not free will. It seems to me that the term 'free will' has become something of an ideology, an aspiration.

To me, it just doesn't apply. Acting according to one's will is inevitable. We are evolved to act, and unless something prevents us from acting, we necessarily act according to our will.

''Wanting to do X is fully determined by these prior causes. Now that the desire to do X is being felt, there are no other constraints that keep the person from doing what he wants, namely X. At this point, we should ascribe free will to all animals capable of experiencing desires (e.g., to eat, sleep, or mate). Yet, we don’t; and we tend not to judge non-human animals in moral terms.''

What we want to do may be determined by prior causes, but what we will do is determined by our own choice. Joachim Krueger may have a PhD, but he does not seem to understand the notions of responsibility or justice. In this quote, he links desire directly to action, without the mediation of rational judgment. This is a serious error.

Please be a bit more careful in who you choose to quote. Or, be prepared to defend his words with your own.

Our choices are determined by mechanisms and processes not of our choosing, they are necessitated choices. Freedom is defined as 'freedom from necessity.'
We don't choose our condition, yet our condition forms our being, our mind, character, thoughts and actions.

Evolutionary Psychology:

''In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.

Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind). The physical organization of the brain evolved because that physical organization brought about certain information-processing relationships -- ones that were adaptive.

It is important to realize that our circuits weren't designed to solve just any old kind of problem. They were designed to solve adaptive problems''




Well, we actually do have some say in our condition. A person may choose to drop out of high school. That choice will change his future condition and thus impact other choices he makes down the road. We have each been active participants in all of the events that have affected us over the years. All of these choices, just like all other events, were causally necessary, of course. But this does not change the fact that we did in fact do the choosing. Nor does it prevent us from learning from our experience to make better decisions in the future.

Our inherent condition began long, long before we decide to drop out of high school. We don't get to choose our parents, genetic makeup, nation, state, society, culture, social conditions, economic status, or physical or mental capacities, all of which make us what we are, shape how we think, and, in relation to our immediate circumstances, determine what we think.

That, after all, is the nature of determinism.
 
How brains organize themselves is immaterial when we simplify systems for the sake of discussing them in less complicated terms.

I can and do just as easily make my observations about simpler systems than neural ones, which STILL display process.

I don't really care what religious beliefs you hold, or on what justification you base your failure to observe that ANY state machine is capable of "choice", and that this choice is either constrained from making valid choices of particular classes or "free": I have free will to choose which of many things I will pick up off my desk (I choose to pick up nothing); I do not have free will to choose which of many things to pick up off your desk (it is not here, and thus I cannot pick things up off it, nor would you let me were I present).

That you can, outside my observations, determine based on a calculation whether and which thing I pick up does not in any way drive what I will pick up. Were you to attempt to calculate which I would pick up off the desk within my observations, well, that's when things get complicated (mostly due to my contrary nature and desire to be strategically unassailable).

What is certain is that my decision will be mine, regardless. It does not matter that I am constructed; perhaps in some way both are true, in that I exercise my free will in making decisions even while my will to determine the actual process is in many ways constrained.

What is certain is that I CAN and DO discuss these things meaningfully. I cannot say the same for someone who thinks that they have no choice in the moment.
So choice is behavior. Define it precisely in material terms. Here's the definition. Choice: an act of selecting or making a decision when faced with two or more possibilities. Your job is to supply the materiality, the operations. My sense is you'll have trouble with 'choice', 'decision', and 'faced'. Oh yeah, you'll probably have problems operationalizing behavior as well. This exercise request is legit since we are discussing determinism. The point I'm making is that self-reference and words not materially defined don't fit within determinism. You need to specify what the material basis for a mind is, for instance. Otherwise I'll just continue my freelance irritations to your non-operable anchored tech exercises.
So the most basic form is the JNZ instruction:
Jump if not zero. One possibility is that the context, unknown of the core, contains zero, and the PC executes jump. One possibility is that the context contains "zero" and the PC executes an increment.

These are both real possibilities for the architecture to encounter. One will happen, one will not and this choice will be made on the basis of the contents of a register.

We have observed that the rules of the system will allow a differential behavior on a singular element.
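The branch behavior under discussion can be sketched as a tiny simulated core. This is illustrative only, using the conventional JNZ semantics (the jump is taken when the tested value is nonzero, otherwise the PC falls through); the parameter names are invented for the example.

```python
# Minimal sketch of a core executing JNZ (jump if not zero).
# The "choice" is which of two possible next states the program
# counter takes, decided solely by the contents of a register.

def step_jnz(pc, register, jump_target):
    """Return the next program counter after a JNZ instruction."""
    if register != 0:
        return jump_target   # branch taken
    return pc + 1            # fall through to the next instruction

# Both outcomes are real possibilities for the architecture;
# in any one run, the register contents determine which occurs.
print(step_jnz(pc=10, register=5, jump_target=40))  # -> 40
print(step_jnz(pc=10, register=0, jump_target=40))  # -> 11
```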
Your reply made sense. It wasn't responsive, but it makes logical sense. That there are two possibilities is only one criterion for choice making. The other is that the chooser understands both options. You are going to be hard-pressed (you actually state the circuit does not know) to demonstrate that a circuit's construction is known to the circuit. As I see it, a bit comes through and the circuit operates. If it has a context zero there will be one result; if it has context "zero" there will be another result. It will do the same thing every time in the same context. Seems pretty deterministic to me. You still need to define choice operationally. "Unknown of the core" isn't an operational statement.
I bolded the part that indicates you are not really understanding what choice is in the context.

Nowhere do I demand this for choice. You have shoehorned "understanding" in as if that is necessary to choice. It is not.

Understanding is a requirement for "intelligence" or "intelligent choice", but not for choice in general.

I did in fact make a typo (it is zero or "not zero"), but you are not the sort to give charity to opposing viewpoints for the sake of understanding, else we would not be here.

As it stands, the bit does not "come through"; it is "looked at". An event happens, and as a part of that event something changes, so the circuit looks at more information before doing a thing.

That something does the same thing in the same context makes it deterministic. That something does different things in different contexts means that those contexts generate differential choice within the system. That the system's choice is blind or reasoned doesn't matter to the fact that a choosing operation happened.

These choices can be massive or complex. What is important is the consistency of the state machine that displays choice behavior.
 
Evolutionary Psychology:

''In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.

Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind). The physical organization of the brain evolved because that physical organization brought about certain information-processing relationships -- ones that were adaptive.

It is important to realize that our circuits weren't designed to solve just any old kind of problem. They were designed to solve adaptive problems''

DBT, quick note: I found the article located here: https://www.cep.ucsb.edu/primer.html
 
DBT,

Before moving on to the rest about the brain and all, I want to focus on this. You write:

I’m saying that determinism doesn't allow alternate actions in any given instance in time. Compatibilists acknowledge this, which is why the compatibilist definition is ''to act in accordance (unimpeded) with one's will'' and not the ability to have done otherwise.

This is not what I have been saying, or what Marvin has been saying. I can’t speak for all compatibilists, but what we have been pointing out, again and again, is that your formulation, “could not have done otherwise,” is DIFFERENT FROM “would not have done otherwise,” under identical circumstances.

As I have taken pains to point out, this is not just an idle semantic dispute. It is at the heart of the modal scope fallacy, wherein one confuses contingency (could have done otherwise) with necessity (could NOT have done otherwise).

This morning, I COULD HAVE had pancakes; but instead I had eggs. Given identical antecedent conditions, I WOULD HAVE had eggs again; it does not follow that I COULD NOT have had pancakes.
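The scope distinction above can be written in standard modal notation (added here for illustration, not quoted from any poster): determinism licenses only the first formula, the incompatibilist conclusion requires the second, and inferring the second from the first is precisely the modal scope fallacy.

```latex
% Let C = "identical antecedent conditions obtain", E = "I have eggs".
% Determinism gives necessity of the conditional:
\Box\,(C \rightarrow E)
% The incompatibilist conclusion needs necessity of the consequent:
C \rightarrow \Box E
% The second does not follow from the first.
```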
 
Basically:

Principle 1. The brain is a physical system. It functions as a computer.

Why Your Brain Is Not a Computer

From the above:

According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

Bold mine.
 
The brain “is actively searching through alternative possibilities to test various options.”

Exactly.

Antecedent events, through a chain of cause and effect, deterministically present the brain with options; and the brain, actively searching through and testing these various options, determines which option is realized. This process does not merely depend upon determinism; it REQUIRES it.

All of this searching through alternative possibilities to test various options, which evolved over billions of years and is extremely energy-consumptive, is, according to you, simply an illusion. To me, the plausibility that evolution would select for illusions in this way is pretty much zero. In effect, what you are arguing is that all this testing, evaluating, deciding, etc., is nothing more than an inconsequential evolutionary spandrel. This pretty much exemplifies the old saw, “extraordinary claims require extraordinary evidence.”
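The deterministic "searching through alternatives" described above can be sketched in a few lines. Illustrative only: the options and scoring criteria are invented, and the point is simply that a fully deterministic evaluation step still does real causal work (remove it, and a different output results).

```python
# A deterministic process that nonetheless searches, tests, and
# selects among alternatives. Nothing here is random or uncaused,
# yet the evaluation determines which option is realized.

def choose(options, criteria):
    """Score every option against the criteria and pick the best."""
    def score(option):
        return sum(weight for need, weight in criteria.items()
                   if need in option["provides"])
    return max(options, key=score)

options = [
    {"name": "pancakes", "provides": {"sweet", "filling"}},
    {"name": "eggs",     "provides": {"protein", "filling"}},
]
criteria = {"protein": 2, "filling": 1}

print(choose(options, criteria)["name"])  # -> eggs
```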
 
A rock rolling down a hill does not act upon its environment.

A brain testing and evaluating options, and then deciding what to do, acts upon its environment.
 
...
If you work in the field of artificial intelligence, you should know that free will is not a factor. That processing information and selecting an option according to sets of criteria has nothing to do with free will.

You really should be more cautious about making blanket statements about a field in which you have no expertise. The usefulness of free will in robotics has long been an open question, and it is a popular topic in AI. Here is a well-known 1999 paper by AI pioneer John McCarthy: FREE WILL-EVEN FOR ROBOTS

I can't access the page. Not that it matters. Unless there has been some miraculous breakthrough, AI has yet to achieve consciousness, let alone 'free will' - something that has been debated for centuries, with two sides to the argument: compatibilism and incompatibilism.

If the issue hasn't been resolved in humans....good luck with computers that possess neither consciousness nor will, only mechanical function.

Are you using the argument from authority? John McCarthy says this, therefore it is so?

No, I'm using it as evidence that free will is a research topic in AI. In fact, it comes up a lot at conferences, because the overarching goal of AI is to replicate intelligent behavior in machines. It is of particular interest in the field of robotics, because robots have all the same problems that humans do in navigating in uncertain environments. They have to make the same kind of choices, and we model their behavior on human and animal behavior.

...
You could, but that begs the question of what we mean by "will". The dispute is over how to define "will" and "free will". That's what you need to focus on. That's why I keep telling you that your argument is irrelevant--because you seem to think that the physical substrate that mental processes depend on defines free will. It does not, and you do not mount any convincing argument that it does. That's what makes your entire argument a fallacy of irrelevance--specifically, a genetic fallacy.

The dispute is about how to define will and whether 'free will' is compatible with determinism....and that is precisely what I have been focusing on all along, in case it has somehow slipped your mind.

I think you believe that you have, but you don't show much evidence of understanding what definitions do or how they work. They don't actually prescribe how words ought to be used. They describe how words are used. So you need to focus on how English speakers actually use the expression to mean something, not on what philosophers think it ought to mean in the context of a deterministic universe. The philosophical discussion, not surprisingly, comes out of theological discussions concerning whether a god that knows the future can judge the actions of beings that don't know the future. Philosophers and theologians have nothing to do with what expressions like "will" and "free will" mean.

As a reminder, incompatibilism argues that the compatibilist definition of free will is flawed for the given reasons.

There is no such thing as a "compatibilist definition of free will". If someone is telling you that there is, then I suggest that you put your hand on your wallet. The concept means the same thing for both incompatibilists and compatibilists. Definitions are fundamentally heuristic in nature. They describe some aspect of usage that helps people discover what the word means. Meaning is something entirely different from definition, and philosophers ought to argue primarily over meanings, not definitions. Otherwise, they are just engaging in a terminological dispute, not a substantive issue.

For example.
''Wanting to do X is fully determined by these prior causes. Now that the desire to do X is being felt, there are no other constraints that keep the person from doing what he wants, namely X. At this point, we should ascribe free will to all animals capable of experiencing desires (e.g., to eat, sleep, or mate). Yet, we don’t; and we tend not to judge non-human animals in moral terms.'' - cold comfort in compatibilism.

That is so wrong. Animals have free will as much as humans do. What they lack is a sense of moral obligation in human terms, not free will. I don't know if you've ever owned a dog, but you've probably heard the expressions "Bad dog!" and "Good dog!" It seems they are expected to know how to behave and to control their behavior. :)

''An action’s production by a deterministic process, even when the agent satisfies the conditions on moral responsibility specified by compatibilists, presents no less of a challenge to basic-desert responsibility than does deterministic manipulation by other agents.''

freedom
1: the quality or state of being free: such as
a: the absence of necessity, coercion, or constraint in choice or action - Merriam-Webster

We've discussed Pereboom's Manipulation Argument in the past, and it has more to do with problems inherent in assigning moral responsibility than with actual free will. We judge the behavior of others because we are all expected to adhere to a moral code. However, that has more to do with moral philosophy than with what it means to choose from a set of alternative acts of will. What does it mean to be responsible for one's actions? His article was very influential among philosophers, but it attracted as much criticism as praise. Although moral responsibility is often associated with free will, it doesn't actually define it. People may not always be held accountable for their actions, just as we don't hold animals accountable for theirs. Lacking a proper sense of moral responsibility does not mean that one lacks free will.
 
So choice is behavior. Define it precisely in material terms. Here's the definition. Choice: an act of selecting or making a decision when faced with two or more possibilities. Your job is to supply the materiality, the operations. My sense is you'll have trouble with 'choice', 'decision', and 'faced'. Oh yeah, you'll probably have problems operationalizing behavior as well. This exercise request is legit since we are discussing determinism. The point I'm making is that self-reference and words not materially defined don't fit within determinism. You need to specify what the material basis for a mind is, for instance. Otherwise I'll just continue my freelance irritations to your non-operable anchored tech exercises.
So the most basic form is the JNZ instruction:
Jump if not zero. One possibility is that the context, unknown of the core, contains zero, and the PC executes jump. One possibility is that the context contains "zero" and the PC executes an increment.

These are both real possibilities for the architecture to encounter. One will happen, one will not and this choice will be made on the basis of the contents of a register.

We have observed that the rules of the system will allow a differential behavior on a singular element.
Your reply made sense. It wasn't responsive, but it makes logical sense. That there are two possibilities is only one criterion for choice making. The other is that the chooser understands both options. You are going to be hard-pressed (you actually state the circuit does not know) to demonstrate that a circuit's construction is known to the circuit. As I see it, a bit comes through and the circuit operates. If it has a context zero there will be one result; if it has context "zero" there will be another result. It will do the same thing every time in the same context. Seems pretty deterministic to me. You still need to define choice operationally. "Unknown of the core" isn't an operational statement.
I bolded the part that indicates you are not really understanding what choice is in the context.

Nowhere do I demand this for choice. You have shoehorned "understanding" in as if that is necessary to choice. It is not.

Understanding is a requirement for "intelligence" or "intelligent choice", but not for choice in general.

I did in fact make a typo (it is zero or "not zero"), but you are not the sort to give charity to opposing viewpoints for the sake of understanding, else we would not be here.

As it stands, the bit does not "come through"; it is "looked at". An event happens, and as a part of that event something changes, so the circuit looks at more information before doing a thing.

That something does the same thing in the same context makes it deterministic. That something does different things in different contexts means that those contexts generate differential choice within the system. That the system's choice is blind or reasoned doesn't matter to the fact that a choosing operation happened.

These choices can be massive or complex. What is important is the consistency of the state machine that displays choice behavior.
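That consistency can be sketched with a hypothetical transition table of my own invention: the machine responds identically to the same (state, input) context and differently to different contexts.

```python
# Hypothetical transition table: each (state, input) pair maps to
# exactly one next state -- the system's "decision matrix".
TRANSITIONS = {
    ("idle", "go"):      "running",
    ("idle", "halt"):    "idle",
    ("running", "go"):   "running",
    ("running", "halt"): "idle",
}

def step(state, event):
    """Deterministic step: same context, same outcome, every time."""
    return TRANSITIONS[(state, event)]
```

The table makes the disagreement concrete: one side calls this differential response "choice"; the other notes that for each context exactly one outcome is possible.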
The bit either leads to the next event or it doesn't; therefore, correcting myself, information comes through the logic system. Either zero or 'zero' is compared, depending on which exists in the logic reference library.

Both must be available since the logic system operates in both cases depending on which context the logic library provides. Since the context can be "zero" the comparator must be very large or the system would lock-up. It probably would be simpler to provide two logic systems for the task. Setting that aside I'm going with what you describe.

And it comes back to 'choice' the definition of which I provided.

Choice: an act of selecting or making a decision when faced with two or more possibilities.

First I would argue the logic system isn't faced with a decision. It merely reacts one way depending on which context is provided. What looks like a choice isn't. There is no possibility of choice. There is only one possibility for each context. The fact that the logic can process both contexts is irrelevant since it is only processing one context at a time.

If both contexts were simultaneously available at the comparator I might be reacting differently but they aren't. With textual handwaving you are trying to make a point which you are not making.

That an element can provide either this or that data output is not the element processing both differentially. There is an either-context-A-or-context-B operation.

A choice would be what a trained observer does when there isn't sufficient information to reliably distinguish between signal present or absent but one makes a choice regardless. Sufficient and insufficient information are available simultaneously. And even that is determined by a suite of existing conditions surrounding the decision space which can be resolved by a more sensitive detector.

I'd hate to think that we make choices simply by accessing wrong information (accessing inappropriate context).
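The trained-observer case can be caricatured as a forced yes/no call on a noisy observation (all names and numbers here are hypothetical, chosen only to make the point): the observer must answer even when the evidence doesn't reliably separate signal from noise, and a more sensitive detector (lower noise) resolves the ambiguity.

```python
import random

def forced_call(signal, noise_sd, threshold, rng):
    """Yes/no judgment on one noisy observation.

    With noise_sd > 0 the same stimulus can produce either answer on
    different trials; with noise_sd == 0 (a perfectly sensitive
    detector) the call is fully determined by stimulus and threshold.
    """
    observation = signal + rng.gauss(0.0, noise_sd)
    return observation > threshold
```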
 
The argument is that the compatibilist definition of free will is not sufficient to prove the proposition.

The compatibilist proposition is simply that free will is a meaningful concept within a deterministic world.

The proof is this:
P1: A freely chosen will is when someone chooses for themselves what they will do, while free of coercion and other forms of undue influence.
P2: A world is deterministic if every event is reliably caused by prior events.
P3: A freely chosen will is reliably caused by the person's own goals, reasons, or interests (with their prior causes).
P4: An unfree choice is reliably caused by coercion or undue influence (with their prior causes).
C: Therefore, the notion of a freely chosen will (and its opposite) is still meaningful within a fully deterministic world.

Distinctions do matter. Of course they do.
Setting the bird free of its cage doesn't establish the bird's freedom of will.

The question is not whether the bird has free will or not. The question is what does "freedom" mean.

The bird's cage is a meaningful and relevant constraint upon the bird's freedom to fly away.

To have any meaning at all, a "freedom" must reference, either explicitly or implicitly, some meaningful and relevant constraint. A meaningful constraint prevents us from doing something that we want to do. A relevant constraint is something that we can actually be "free from" or "free of".

For example:
1. We set the bird free (from its cage).
2. We enjoy freedom of speech (free from political censorship).
3. We were offered free samples (free of charge).
4. We participated in Libet's experiment of our own free will (free of coercion and undue influence).

Freedom of speech, etc, doesn't establish freedom of will for the speaker.

Freedom of speech is about speaking our mind without penalty. If we were penalized for criticizing the government, we would not have freedom of speech. Censorship is a meaningful and relevant constraint upon freedom of speech.

The ball bounces freely down the hillside.

And what would be some meaningful and relevant constraints to the ball's freely bouncing down the hillside? A wall. A boulder. A fallen tree. When you say that "the ball bounces freely down the hillside" you are saying that there were no constraints preventing it from doing so.

In the same fashion "free will", a freely chosen will, implies there were no meaningful or relevant constraints preventing the person from deciding for themselves what they would do.

The bird dives and swoops freely through the air.... these are all actions that follow action production.

Again, the bird swooping freely through the air implies that there were no meaningful or relevant constraints preventing her from doing so (like a hawk, or a glass window pane, or a cage).

It is the nature of action production that is specific to the issue of freedom of will, because it is specifically the means of action production that determines what action is taken at a given instant in time.

Correct. In the case of free will, the question is whether the action was produced by the person's own deliberate choice, or, whether the person was coerced or unduly influenced to do something that they would not otherwise do.

The use of free in relation to action says nothing about the means, state or status of the activator of actions.

The use of "free" in relation to an action implies the lack of any meaningful or relevant constraints preventing the action. For example, a freely chosen will implies the absence of coercion and other forms of undue influence, such that the person was free to decide for themselves what they would do.

I argue that the term free will is redundant. The term 'free will' tells us nothing about human behaviour, means or drivers. We have will, but it's not free will.

Free will tells us that the person's behavior was caused by their own deliberate choice, and that they were not forced to act that way by someone or something else. This information is critical when assessing a person's moral or legal responsibility for their behavior.

It seems to me that the term 'free will' has become somewhat of an ideology, an aspiration.

Nope. It's just a simple empirical distinction between the causes of a person's actions. Was the action deliberate, or was it coerced, or was it insane, or was it accidental, etc. It's a simple but important distinction.

To me, it just doesn't apply. Acting according to one's will is inevitable. We are evolved to act, and unless something prevents us from acting, we necessarily act according to our will.

Well, everything is always inevitable, so inevitability doesn't tell us anything useful. However, whether the person acted deliberately or whether they had a gun to their head, is critical information.

Our choices are determined by mechanisms and processes not of our choosing, they are necessitated choices.

All events are equally causally necessitated. So, that's not useful information. But whether someone made the choice themselves, or, the choice was imposed upon them against their will, is meaningful and relevant information.

Freedom is defined as 'freedom from necessity.'

But freedom is never defined as freedom from "causal necessity", because there ain't no such thing. All events are reliably caused by prior events, without exception, and without distinction. This includes all of our mental events.

Causal necessity is a different subject from practical necessity. Practical necessity is when we must do something whether we want to or not. Causal necessity incorporates all causes, including our wants and desires, within the total scheme of causation.

We don't choose our condition, yet our condition forms our being, our mind, character, thoughts and actions.

It is not necessary to cause ourselves in order for us to be the meaningful and relevant causes of other things. And if we are the meaningful and relevant cause of robbing a bank, then we will be held responsible, even though we have a history of prior causes stretching back to the Big Bang. No one is going to try to arrest the Big Bang.

Evolutionary Psychology;

''In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.

Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind). The physical organization of the brain evolved because that physical organization brought about certain information-processing relationships -- ones that were adaptive.

It is important to realize that our circuits weren't designed to solve just any old kind of problem. They were designed to solve adaptive problems''
Please note the portion I highlighted. There is no either/or between the brain and the mind. It is the same system whether we are speaking of mental operations, like reasoning, evaluating, and choosing or brain neural functions.

One of the interesting functions of the brain/mind, is the ability to symbolically communicate ideas through language. Note that there are no neural connections between the authors' brains and our own. Yet the words on the page physically alter our neural connections such that we understand what they are saying.

Well, we actually do have some say in our condition. A person may choose to drop out of high school. That choice will change his future condition and thus impact other choices he makes down the road. We have each been active participants in all of the events that have affected us over the years. All of these choices, just like all other events, were causally necessary, of course. But this does not change the fact that we did in fact do the choosing. Nor does it prevent us from learning from our experience to make better decisions in the future.

Our inherent condition began long, long before we decide to drop out of high school. We don't get to choose our parents, genetic makeup, nation, state, society, culture, social conditions, economic status, physical or mental capacities, all of which make us what we are, how we think and in relation to our immediate circumstances, what we think.

That, after all, is the nature of determinism.

Yes, and it was that same determinism that assured it would be that individual, personally, and no other object in the universe, that would choose to drop out of school.

Determinism does not change anything. Determinism itself never determines anything. It has no regulatory control. To believe that it is a causal agent that removes our freedom, our control, or our responsibility, is an illusion.
 
Choice: an act of selecting or making a decision when faced with two or more possibilities.

First I would argue the logic system isn't faced with a decision.
Then you are arguing nonsense. A decision here is just an event that goes one of "two or more possible ways", every time the same way in the same context, actually finding resolution.

I do not accept your begged question in the second statement here. Thus we are at impasse.
 
The question is not whether the bird has free will or not. The question is what does "freedom" mean.

The bird's cage is a meaningful and relevant constraint upon the bird's freedom to fly away.

To have any meaning at all, a "freedom" must reference, either explicitly or implicitly, some meaningful and relevant constraint. A meaningful constraint prevents us from doing something that we want to do. A relevant constraint is something that we can actually be "free from" or "free of".

For example:
1. We set the bird free (from its cage).
2. We enjoy freedom of speech (free from political censorship).
3. We were offered free samples (free of charge).
4. We participated in Libet's experiment of our own free will (free of coercion and undue influence)
I was actually thinking this through last night, in so far as coming to the realization that free will depends on constraint to operate: I was chasing my brain through circumstances wherein one would face a decision without ever being subordinated in will to limitations of action.

It is the rock in the stream that makes the atom of water break left or right, that forces decision on the basis of what shape the rock has, how it divides the stream.
 
A stream has no interests in where it flows. A guy in a kayak actually cares about whether he goes over a waterfall or not. Inanimate objects literally have no skin in the game, but the guy in the kayak does.

Free will, like any other freedom, is the absence of any meaningful and relevant constraints that prevent the person from choosing for themselves what they will do.

Causal necessity is not a meaningful or a relevant constraint. It is not a meaningful constraint because it does not prevent us from doing what we want to do (it is the source of our want). And it is not something that we could be free of even if we wanted to, so there is no reason to ever bring it up. It makes itself irrelevant by its own ubiquity.
 
Why is interest necessary? It shall be as it is by its nature and by the nature of its constraints. We are not even talking about a stream, but a rock and a single atom within it. On the scale of the stream itself, things are mostly static. Again, there is no meaningful constraint around which to break on that order.

I do see free will as existing on the scale of person, but I do not find the restriction of free will to things as grand only as persons a meaningful distinction.

The transistor serves in its truth just as easily.

As always, the system, in its consistent response to consistent context and differential response to different context, creates "decision matrices" on the system.

The existence of this decision matrix through time is the real important bit, but only pops into reality when the constraint pushes change of context.
 
You're doing what DBT was doing, burying the distinction within the generality, and losing significant meaning. The consequences of the kayak going over the dam are pretty dire for the guy in the kayak. The kayak, the dam, the water, and the atoms in the rocks in the water, on the other hand, could care less.

If the water flow is free to control where the kayak goes, then the kayaker dies. If the kayaker controls where it goes, then the kayaker lives. So, the kayaker is the only object that has an interest in the outcomes of this event. And that interest in the consequences is producing considerable action. So, the interest serves as a motivational cause of action.
 
I think Jarhyn is closer to nailing the issue. It takes both opportunity and constraint to bound the possibility of choice. He only misses it with his circuit in that the bit lacks both options being available at the critical juncture. He, being the circuit god, constrains the availability of options by asserting singular fixed context. Now his example as he presents it is what I think most people believe constitutes a choice. As I've shown his model is clearly not a choice since it doesn't include constraint and opportunity. In fact, deterministic behavior is always limited by opportunity.

He's also correct in his assertion that most think that attributing multiple contexts to human action enables choice. It doesn't, any more than several force vectors pushing on a rock from different angles impose multiple outcomes. The schemes we develop to justify the notion of choice are inventions outside the scope of empirical scientific law.
 