• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Compatibilism: What's that About?

Not according to neuroscience.

Neuroscience says it is my own brain that is making my own decisions, such as whether I will have the steak or the salad for dinner. Do you seriously disagree after posting all of those quotes from neuroscientists saying that it is in fact my own neural architecture that is producing my own decisions?

What Marvin chooses to write is determined by the information acquired by the brain interacting with the systems of the brain. The state and condition of the system in each incremental moment of processing makes the decision.

Exactly. And that's actually my brain interacting both within its own collection of specialized functional areas as well as with the external social and physical environments. For example, as I walk my body through the doorway into the restaurant, as I browse the restaurant menu, and as I recall what I had for breakfast and lunch today, and choose to have the salad instead of the steak, and tell the waiter "I will have the Chef Salad, please", and later when I pay the bill. That's all my own brain doing its thing.

Given the deterministic nature of the process, if regulation means a possible alternative, there is no regulation because there is no possible alternative.

And that is exactly what this thread is about: every event is always causally necessary from any prior point in time, but, what of it? Should this bother us in any way? Does this change anything at all in how we operate, the alternatives that we imagine, and the things that we can do, or the things that we will do?

Nothing changes at all! It is still us being us, doing what we choose to do. We walk into the restaurant, browse the menu, and choose the salad, even though we could have chosen the steak.

All of the events are exactly what they look like, one event reliably leading to the next. It's dinner time and we feel hungry. Several of us decide we will have dinner at a restaurant. We walk into the restaurant, browse the menu, and each of us decides what we will have for dinner. We each tell the waiter what we will have. The waiter tells the chef. The chef prepares the meals. The waiter brings us our meals and our bills. We eat our dinners, have our conversations, and when we're done we responsibly take our bills to the cashier and pay for our dinners.

Now, if we want, we can add the phrase, "It was causally necessary from any prior point in time that", at the start of every one of those sentences. Each is a separate event reliably caused by the prior event in the previous sentence. And each snippet from the chain is an inevitable event that would not have been otherwise.

The surprising fact is that one of the things that would not have been otherwise was that our logically necessary notions "I can order the steak" and "I can order the salad" were both true. And if "I can order the steak" was true at that moment, then "I could have ordered the steak" will be true later. "I could have" is just the past tense of "I can". If any "I can" was ever true in the past, then its matching "I could have" will forever be true in the future. This is the logic of our language.

It was causally necessary from any prior point in time that "I would choose the salad", and it was also causally necessary from any prior point in time that "I could have chosen the steak". And these two facts could not have been otherwise, due to causal necessity.

Ironically, even the ability to do otherwise could not have been otherwise.

And that completes the understanding of why universal causal necessity/inevitability changes nothing.

Regulation in this instance is the non-chosen state of the system.

The fact that the system makes choices was unchosen. Making choices is just one of those things that the system naturally does. It never had to choose to have the ability to make choices, it just found itself making them, frequently.

Freedom requires the possibility to do otherwise in any given circumstance. Determinism eliminates the possibility to do otherwise in any given circumstance.

Well, no, not in "any given circumstance", but only in the circumstances where you must choose between two or more options. And in those circumstances where you must make a choice, there will always be at least two options to choose from.

Determinism eliminates nothing. If it is causally necessary that you will make a choice, then you will definitely be making that choice, and you will definitely have at least two options to choose from, and you will definitely be the causal determinant of the thing that you will do, and you will definitely have at least one option that you didn't do, but could have done instead.

Freedom is incompatible with determinism.

Freedom requires reliable cause and effect. Determinism, which presumes perfectly reliable cause and effect, would be especially compatible with freedom. So, your claim of incompatibility is false.

Will is not the agency or initiator of thought or action (neural networks are), therefore will cannot be described as free will. We have will, not free will.

Free will is when we (our neural networks) choose for ourselves what we will do, while free of coercion and other forms of undue influence. Since we observe ourselves and others doing this all the time, it is silly to suggest that it doesn't happen.
Lots of imagination about what the brain does. Actually, we don't know what the brain does, mainly because most of what the brain does consists of various neural metabolic functions and transports, which of course we can only guesstimate.

All we really have are indicators of metabolic activity in neural clusters and a few traced indices of various metabolic stabilization in neurons. We ain't close. Oh, we guess and we hypothesize, but we DON'T KNOW!!! All Marvin's wonderful it's-this-way statements are just Marvin's individual proclamations.

That and a piece of bread with access to a toaster may or may not result in a burnt home.
 
static volatile State state(INITIAL);
RegisterStateLocation(&state); // unnecessary, but this allows external access to the state so that the while loop "makes sense"

while (1)
{
    if (state.trylock())
    {
        switch (state.value())
        {
        case INITIAL: // "Will"
            ...
            if (IsInitialDone()) state.set(FIGHT);
            break;
        ...
        }
        state.unlock();
    }
}

It was odd that the indentation disappeared when you posted this, but reappeared when embedded in quotes (makes it easier to match the { } pairs). I'm still waiting for a programming language that will group according to indentation, period.

Huh. And as soon as I post it, the indentation disappears.

Am I correct that the "break;" inside the case statement inside the switch statement exits the switch clause (skipping over any remaining case statements), bringing us to the state.unlock() function?

However, we're still in the while(1) so we continue to loop indefinitely?


Here, we can see that "state" encodes a "will", as in literally "the next time this comes around, the switch will enter case INITIAL."

The system has a "will".

Whether the will is free to operate depends on other geometries: maybe a failure case is entered.

There are a number of further complexities, but this is really what it all comes down to: systemic state retention.

I picture the "will" a bit differently. The program itself is Jarhyn's will for how the program operates, which is an example of a deliberate will. However the program will not run until the electricity is flowing, which is the driving force, similar to the biological drive to survive, thrive, and reproduce.
 
Lots of imagination about what the brain does. Actually, we don't know what the brain does, mainly because most of what the brain does consists of various neural metabolic functions and transports, which of course we can only guesstimate.

Right. But our guesses are based on observations of behavioral changes when specific areas of the brain are damaged. We've known since Hippocrates that brain injuries affect behavior, emotions, and thinking.

All we really have are indicators of metabolic activity in neural clusters and a few traced indices of various metabolic stabilization in neurons. We ain't close. Oh, we guess and we hypothesize, but we DON'T KNOW!!! All Marvin's wonderful it's-this-way statements are just Marvin's individual proclamations.

I'm just trying to keep up with DBT's reductionism, by pointing out that whatever we reduce it to, whether "neural architecture", or "state of the system", or "information processing", it is still the same brain in our own skulls that is deciding whether we will have the steak or the salad.

Free will is when we decide for ourselves whether to order the steak or the salad. It's really nothing complicated. It is an empirical observation of our behavior and the circumstances (whether we were free of coercion and undue influence or not).
 
Am I correct that the "break;" inside the case statement inside the switch statement exits the switch clause (skipping over any remaining case statements), bringing us to the state.unlock() function?

However, we're still in the while(1) so we continue to loop indefinitely?
Yes. Exactly right.

Volatile indicates to the compiler that some other process may be mucking about with the variable (because otherwise the compiler can get cheeky and optimize the reads away, saying "we don't write to this locally...", not understanding that registering the address of the state means someone else, somewhere else, can lock it and impose a state change).

We assume, loosely, that somewhere in the "..." the code contains a "state".

The operant reason for this is that I am trying to identify "will" at a slightly more fundamental level than "neurons": the level of the "algorithmic requirements" for the property.

In short, if free will is real we shouldn't need neurons to show it; more static forms of algorithmic structure should contain the necessary geometry.

It can be really disgustingly hard to identify the algorithm of a neural network; they aren't assembled in the same way. So looking at algorithms that are clearly well defined makes it a bit less confusing.
 

After I retired I thought it would be fun to learn C and C++. I got through Kernighan's C, but only through chapter 4 of Stroustrup's C++, then got distracted by other things. I started out with Basic on a tape machine, then Cobol for the greater part of my life, then picked up VB but was frustrated when they changed everything to C++ style. So, it was a challenge left unmet. VB 3.0 was everything I ever wanted in a programming language. Today I use MS Access at home for keeping track of everything.

I guess I view will as a kind of driving force. Choosing redirects that force to perform one thing rather than another. The foundation of human will is our biological drives.

Hey, in your games, do you ever create an algorithm that creates algorithms?

Back on the Burroughs Medium System, they provided us with three tools that all created Cobol programs. RPL generated Cobol report programs from simple parameters. NDL generated a Cobol program that managed a computer communications network. And Disk Forte generated a Cobol program that managed a database.
 
I've thought a lot about creating algorithm-creation algorithms.

One of the bigger problems is that I don't have the discretionary income to afford it.

It wouldn't be hard, per se, to use this stupid little game to do it; it's just not something I have the time to do, nor do I have the political connections to the creator to get the source code.

It's more about access and available time. Also, it's not something I'm prepared to do in a way that would lose me control of my work.

You hear often that "they spent so long asking whether they could that they did not ask whether they SHOULD." Well I'm the guy that could and yes, I ask whether I should.

But I know someone else will, so I might as well make it happen on my terms.
 
It's probably a big challenge anyway. A compiler converts a higher level language to machine language, so it is also a program that writes another program. And SQL is a higher symbolic language that gets converted into a database access program. In Microsoft Access they have a form that lets you select the data elements from a table, and specify how to group items, and set selection criteria, and then it generates the SQL for you. Have you ever used Access?
 
Not according to neuroscience.

Neuroscience says it is my own brain that is making my own decisions, such as whether I will have the steak or the salad for dinner. Do you seriously disagree after posting all of those quotes from neuroscientists saying that it is in fact my own neural architecture that is producing my own decisions?

It's not enough to say that it is our own brain that is doing it if we have no access to or control over what the brain is doing, or its state and condition.

Without control of state and condition, function or information processing, free will plays no part.

If free will plays no part in state, condition, information processing or motor action, it's false to claim that we have free will.

Free will is just an ideology, a label.

This article does not argue for determinism but gives an argument against compatibilism:
1. If causal determinism is true, all events are necessitated
2. If all events are necessitated, then there are no powers
3. Free will consists in the exercise of an agent’s powers

Therefore, if causal determinism is true, there is no free will; which is to say that free will is incompatible with determinism, so compatibilism is false.
 
You wasted your time on an explanation that goes against the very thing you are arguing for.
I am not arguing that the universe is not deterministic.

Rather I am arguing that all actors within all systems, regardless of whether they are "deterministic", have "free will" so long as the system holds state. For instance, "1+n" holds no state as a function.

"1+n+(previous n)" is a state function of will on (previous N state).

It's more a function of the existence of a state machine. You're the one arguing about what relationships state machines can and cannot have of themselves, and embarrassing yourself because you don't seem to understand state machines, or the statefulness of systems in general.


You are simply asserting free will. The system doesn't operate on the principle of free will. A tree grows and responds to its environment, signaling, adapting, responding, etc, without will or consciousness, just its own makeup.

Functionality is not free will. Acting according to one's makeup and nature is inevitable so does not equate to 'free will' because nothing is willed. The system functions as it has evolved to do.
The problem here is that you seem to get easily confused when more than one complicated thing must be observed interacting.

The basic or fundamental definition of freedom
Using a dictionary to NEWSPEAK the actual functional definition of freedom away because you wish to engage in a philosophical straw-man argument again?

YAWN!

I showed you a REAL system where a REAL property "will" is held by a REAL actor, and that REAL property of will has an observable geometry such that it can be calculated whether that will is free, marking it as objectively REAL.

The only reason that the will cannot, in this example, be directly controlled by the actor is that the actor is missing (by circumstance, not by necessity) a choice function which chooses the will.

And it is trivial insofar as self-review is lacking on account of the core function not having state variable handles into the various choice functions.

Neural systems allow back-prop and training such that when the choice function executes and the result fails the pre-check, the choice function gets modified.

I know this because I'm the one whose activity is narrated generally through subvocalization, and the thing that is narrated is exactly that activity: the output of the thing was not "like" any sort of output that would get me to my goal. You could even say I "dislike" the output, and, just like I keep rejecting YOUR bullshit, I train the system until it produces output that, when I route it through that other set of neurons, that set does not complain.

Then I actually DO the thing and observe the result. Another more automatic set of neurons looks at whether the goal of the behavior was "satisfied", and if it was, gold star, and if it was not... Reject that output and back to back-propagation.

At any rate we have trivially proved that an entity can hold a "will", and that the will can objectively be constrained or free. At this point it is you who must prove that neurons are incapable of this basic form of simple algorithmic structure.


Your brain is generating not only your 'subvocalizations', it is generating your you: your sense of self identity, self awareness, thoughts, feelings and actions.

Whatever you do, believe, think or feel, the underlying unconscious information processing activity of the brain is doing it without your awareness as a conscious being.

"The illusion of" conscious agency or 'free will' is exposed when something goes wrong with the system. Consciousness emerges from information processing; it cannot access it or control the means of production: the activity of neural networks/the brain as a modular system.

If the mechanisms fail, consciousness suffers the consequences. Free will is an illusion.
Obviously, it IS doing it with my awareness as a conscious being because I am aware of it, as a conscious being.

My brain as Marvin points out so many times IS me, or at least a part of my brain is.

It's entirely possible to address the part of the brain, specifically, that implements a mutable algorithm of task evaluation.

I don't even need to have consciousness to have free will, though, which I demonstrated with my example of a deterministic universe with active free will.

Consciousness is entirely separate from the process of free will.

Consciousness isn't even required for moral culpability.

All that is required for ethical culpability is "object ownership of cause" which is to say "this object caused this outcome as a production of itself through prior causes; prior causes would not cause this if not mediated by this formed object."

Or in other words "it was the state of this state machine that needs to be altered lest the state it is in lead to bad things"

Or "it was that machines will that caused this presently so we need to modify it's will".

Or "Tom is a murdering bastard, let's throw him in the pokey until he learns killing people is wrong."

This has been addressed, over and over, time and time again.



Quote:
''The consequence argument can be viewed as part of a more general incompatibilist argument. This standard incompatibilist argument can be stated as follows (see Kane, 2002):

(1) The existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely.

(2) Determinism is not compatible with alternative possibilities (it precludes the power to do otherwise).

(3) Therefore, determinism is not compatible with acting freely.

The consequence argument can be seen as a defense of premise (2), the crucial premise, since it maintains that, if determinism is true, the future is not open but is rather the consequence of the past (going back before we were born) and the laws of nature.''

Jarhyn - ''A deterministic system is a system in which no randomness is involved in the development of future states of the system.''

Bye, bye any notion of compatibilist free will, Jarhyn.

Protest as much as you like, freedom of will is not compatible with determinism.
 
It's going around in circles as it is.

To avoid going around in circles, you must actually engage with the ideas that threaten your own.


There is no real threat. There are two sides of the debate, compatibilism and incompatibilism.

You say that I am not engaging with compatibilist ideas, I say that I have been addressing the issues and flaws in compatibilism all along, providing arguments, valid objections and evidence against free will agency from neuroscience, case studies, agency, etc, and that it is the compatibilists who dismiss information that is contrary to their position, and are not engaging with incompatibilist concerns and counter arguments.

This site includes theism, but it offers another way of putting the issue of compatibility:

''Let's assume there is some fact X that is not up to us, and X entails some fact Y, but Y is up to us.

we could restate this as:
Some fact Y is up to us, X entails Y, but X is not up to us.

which means:
We can render some fact Y false, X entails Y, but we cannot render X false.
This is to say that X entails Y, and Y is false while X is true, which is a contradiction.

Here is an example to illustrate: the fact that you are wearing a shirt entails the fact that you are not naked. If premise 3 were false, it would lead to situations where you could be naked but not shirtless (for example). This is incoherent. If the fact that you are wearing a shirt entails that you are not naked, then you cannot be both naked and wearing a shirt.

If premise 3 is false, it leads to a contradiction. Therefore, Premise 3 cannot be false.

And hence, compatibilism is false.'
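As an aside, the contradiction step the quoted argument leans on (X entails Y, X is true, yet Y is rendered false) is ordinary propositional logic, and can be checked mechanically, e.g. in Lean (a sketch of mine, not from the article):

```lean
-- X entails Y, X holds, and Y fails: contradiction.
example (X Y : Prop) (h : X → Y) (hx : X) (hny : ¬Y) : False :=
  hny (h hx)
```

Whether the premises themselves are true (in particular, whether determinism really makes X "not up to us" in the relevant sense) is of course the actual point in dispute.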
 
This has been addressed, over and over, time and time again
No, you have spouted religious garbage over and over time and again. Your religion did not become any less wrong or any less burdened.


The existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely.
This is FALSE the way you write it. We do not agree on this premise therefore it is invalid to use in an argument until you support it.

The existence of alternate possibilities on the basis of local knowledge is required for deciding on one's will freely, but the will need not be put there freely to itself be "free". In humans it is; in dwarves it is not; but the latter still have free wills and constrained ones.

In fact, my dwarf doesn't even need to have two options. They just need to have something that they are doing. I pointed out exactly which wills in my deterministic universe were free.

Now, when it comes time to determine the next thing to do, there are many options, which a choice function reduces to one.

You will need to actually go into the example if you wish to invalidate that.

I have shown a free will that required no second option, did not require conscious control of will, did not require self-review or even intelligence.

It required a state and a possible failure condition of another algorithm whose reality was beyond the horizon of the state machine's internal model of external state.

All of your other failures of logic spread from there.
 
It's not enough to say that it is our own brain that is doing it if we have no access to or control over what the brain is doing, or its state and condition.

How does the brain step outside of itself in order to access and control itself? You keep insisting upon freedoms that are both physically and logically impossible!

Our brain itself, by its own neural architecture, performs information processing that includes decision making. In doing so, it exercises control over the body's actions and the mind's choices. For example, in the restaurant, it processes the menu, recalls what we had for breakfast and lunch, and decides we should have a salad for dinner instead of the steak. It voices its chosen will to the waiter, "I will have the Chef Salad, please".

And because it happens to be our own brain, we are responsible for paying the bill for our dinner.

Without control of state and condition, function or information processing, free will plays no part.

Free will is not an external actor exerting control over the brain! Free will is the brain itself, choosing what the body will order for dinner, while free of coercion and undue influence.

Free will is just an ideology, a label.

It's not an ideology. Free will is what we call the event where someone decides for themselves what they will do. The "free" in free will means the choice was made while free of coercion and other forms of undue influence.

This article does not argue for determinism but gives an argument against compatibilism:
1. If causal determinism is true, all events are necessitated
2. If all events are necessitated, then there are no powers
3. Free will consists in the exercise of an agent’s powers

Therefore, if causal determinism is true, there is no free will; which is to say that free will is incompatible with determinism, so compatibilism is false.

The problem is the paradox in the second premise, "2. If all events are necessitated, then there are no powers." If there are no powers, then there is no power to necessitate! Causal necessity requires that each event has the power to necessitate the next event in the causal chain. If the prior event lacks that power, then the subsequent event will not occur, and the chain collapses.

So, I think I've solved this riddle without reading the article. But if you find something in the article that justifies re-evaluation of my criticism, please advise.
 
There are two sides of the debate, compatibilism and incompatibilism.

Okay.

You say that I am not engaging with compatibilist ideas, I say that I have been addressing the issues and flaws in compatibilism all along, providing arguments, valid objections and evidence against free will agency from neuroscience, case studies, agency, etc, and that it is the compatibilists who dismiss information that is contrary to their position, and are not engaging with incompatibilist concerns and counter arguments.

I'm sure that many compatibilists have made many arguments over the years. Philosophy tends to generate a lot of different arguments, some strong, some weak. My argument is simple and straightforward:

P1: A freely chosen will is when someone chooses for themselves what they will do, while free of coercion and other forms of undue influence.
P2: A world is deterministic if every event is reliably caused by prior events.
P3: A freely chosen will is reliably caused by the person's own goals, reasons, or interests (with their prior causes).
P4: An unfree choice is reliably caused by coercion or undue influence (with their prior causes).
C: Therefore, the notion of a freely chosen will (and its opposite) is still meaningful within a fully deterministic world.

And I have answered all of your objections from neuroscience repeatedly.

Your arguments seem to involve a different notion of free will, one that involves some kind of magical "freedom" that we both recognize as an impossibility, such as freedom from causal necessity, or freedom from determinism, or freedom from one's own brain, or freedom from the physical world, or some other imaginary freedom that you think free will ought to entail, even though you admit and argue that they are impossible.

The notion of free will that I am defending is the simple empirical event in which a person decides for themselves what they will do, while free of coercion and undue influence. And I have defended my definition with three dictionaries, common sense, and with your own neuroscience that bears witness to the fact that we do in fact make decisions for ourselves as to what we will do.

"Let's assume there is some fact X that is not up to us, and X entails some fact Y, but Y is up to us.
We could restate this as:
Some fact Y is up to us, X entails Y, but X is not up to us.
Which means: We can render some fact Y false, X entails Y, but we cannot render X false.
This is to say that X entails Y, and Y is false while X is true, which is a contradiction."

Another forking riddle? The solution is that we are X and we can choose to render Y true or false.

This is just another version of the "prior causes paradox". If we have prior causes, then are we the "true" causes of anything? Aren't those prior causes the "true" causes? Well, no. Because none of those prior causes can pass that same test. All prior causes have prior causes, so no prior cause would qualify as a "true" cause. The causal chain would disintegrate because there would be no "true" causes anywhere in the chain.
 
we have no access or control over what the brain is doing, or its state and condition
You might have no access or control over its state and condition, perhaps, if you are literally a rock, or perhaps a dwarf.

Whether we have access to the state and condition is immaterial to whether that state encodes will, and whether that will is free.

We do have access to its state, both to read and to write to said state. Neural networks in most living things are massively recursive, and they can absolutely recurse into the mechanisms that control back-propagation.

I mean fuck, I could set up a dwarf to manage a correction to a selection based on return and observed or calculated error.

This can be done in NAIVE algorithms without the absolute mutability that neural networks bring to the table.
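To illustrate the point, here is a minimal sketch of such a naive correction loop: a selector that picks an option by score and then corrects that score from the observed return. The option names, scoring scheme, and learning rate are all made up for the example; the point is only that error-driven correction needs no neural network at all.

```python
# A naive selection rule that corrects itself from observed error:
# it keeps a per-option score, picks the best-scoring option, then
# nudges that score toward the observed return.

def make_selector(options, learning_rate=0.5):
    scores = {opt: 0.0 for opt in options}  # internal state of the selector

    def select():
        # deterministic choice: highest current score (ties broken by name)
        return max(sorted(scores), key=lambda o: scores[o])

    def correct(option, observed_return):
        # calculated error between expectation and outcome drives the update
        error = observed_return - scores[option]
        scores[option] += learning_rate * error

    return select, correct

select, correct = make_selector(["dig", "haul", "brawl"])
correct("haul", 1.0)    # hauling went well
correct("brawl", -1.0)  # brawling went badly
print(select())         # prints "haul"
```

The same shape works for any "selection based on return and observed or calculated error": observe, compute the error, fold it back into the state that drives the next selection.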

I access and observe my own mind's state quite readily. I hear from research that, as crazy as it may sound, some people don't have that access.

Like, they can't play music inside their own heads, or watch a scene from a movie, or draw lines or turn shapes. Some people just are blind to the inside of their own mind.
 
His objections from neuroscience don't even make sense!

Neural clusters can form arbitrary algorithmic expressions within the bounds of their complexity limit, which is largely a function of how many neurons are available.

This is what is important to realize.

Can you write an algorithm that modifies its own state?

Absolutely!

Thus you can arrange neurons such that the set of neurons themselves manage their own state in the course of their normal behavior.
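A tiny sketch of that idea in ordinary code, under invented details (the target-fitting task, the step size, the damping rule are all illustrative): the algorithm holds first-order state (its estimate) and second-order state (the rule governing its own updates), and editing the latter is part of its normal operation.

```python
# A system whose normal operation includes rewriting its own controlling
# state: it fits w toward a target, and also adapts the step size that
# governs its own updates (state managing state).

def self_tuning_fit(target, steps=100):
    w = 0.0          # first-order state: the estimate
    step = 1.0       # second-order state: the rule governing updates to w
    prev_error = abs(target - w)
    for _ in range(steps):
        error = target - w
        w += step * error * 0.1   # update the estimate
        # the algorithm inspects its own progress and edits its own update rule
        if abs(error) > prev_error:
            step *= 0.5           # overshooting: damp itself
        prev_error = abs(error)
    return w

print(round(self_tuning_fit(3.0), 2))  # prints 3.0
```

Nothing here appeals to anything outside the algorithm; the mechanism that controls the updates is itself just more state the algorithm reads and writes.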
 
I access and observe my own mind's state quite readily. I hear from research that, as crazy as it may sound, some people don't have that access.

Like, they can't play music inside their own heads, or watch a scene from a movie, or draw lines or turn shapes. Some people just are blind to the inside of their own mind.

I'm reading Mark Solms' book, "The Hidden Spring: A Journey to the Source of Consciousness". He has studied children with a genetic disease which results in a totally absent cortex. But they still respond emotionally to stimuli, like one little girl who reacted with joy when her baby brother was laid on her chest. He goes over several other variations of patients missing specific cortical areas, including blind patients who cannot consciously see but can still avoid obstacles they are never aware of. It's a cool book if you can get past his use of "learnt" rather than "learned".

In Michael Graziano's book, "Consciousness and the Social Brain", he describes the hemi-spatial neglect syndrome, where the patient is unaware of objects on one side of the room, but unaware of his unawareness, so he never senses that he is missing anything. If you toss a ball at him from that side he will automatically bat it away, but doesn't know why.
 
This has been addressed, over and over, time and time again
No, you have spouted religious garbage over and over time and again. Your religion did not become any less wrong or any less burdened.

What is clear is that you don't understand the debate between compatibilism and incompatibilism or the implications of determinism, which is why you keep introducing extraneous elements like stochastic processes, random or probabilistic events when the issue is compatibility in relation to determinism.

You have done this repeatedly despite the fallacy being pointed out over and over.


The existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely.
This is FALSE the way you write it. We do not agree on this premise, therefore it is invalid to use in an argument until you support it.

The existence of alternate possibilities on the basis of local knowledge is required for deciding on one's will freely, but the will need not be put there freely to itself be "free". In humans it is, in dwarves it is not, but the latter still have free wills and constrained ones.

In fact, my dwarf doesn't even need to have two options. They just need to have something that they are doing. I pointed out exactly which wills in my deterministic universe were free.

Now, when it comes time to determine the next thing to do, there are many options, which a choice function reduces to one.

You say that as if you are consciously doing the determining, that you can consciously determine what is going on in your brain.

If that is what you mean, you don't understand the research in neuroscience.
 

P1: A freely chosen will is when someone chooses for themselves what they will do, while free of coercion and other forms of undue influence.
P2: A world is deterministic if every event is reliably caused by prior events.
P3: A freely chosen will is reliably caused by the person's own goals, reasons, or interests (with their prior causes).
P4: An unfree choice is reliably caused by coercion or undue influence (with their prior causes).
C: Therefore, the notion of a freely chosen will (and its opposite) is still meaningful within a fully deterministic world.

To keep this brief and to the point:

I'm familiar with your argument. As I have pointed out, the argument is valid, the conclusion follows from the premises, but as the premises are flawed, the argument is not sound.

The major flaw in the argument is P1.

Why? Well, as pointed out, because it does not take the unconscious neuronal nature of decision making into account.

Namely, if a decision is determined unconsciously and no alternate action is possible (determinism), and it is the deterministic incremental states of neural networks unfolding over time that determine the decision and related action, then the decision was not freely chosen. Rather than freely chosen, it was determined.

Therefore, rather than being a freely willed decision, it is a determined decision followed by a determined action (freely performed/necessarily performed). Consequently, it is false to label determined decisions and related actions as freely willed.

Rather than [freely] willed, they are determined or necessitated.

Inner necessitation is no more an instance of free will than external coercion -

The distinction is that you either act according to your necessitated will (necessarily) or you are being forced against your necessitated will.

At no point is will free of necessitation.

As necessitation or determinism is the antithesis of freedom, free will is not compatible with determinism.

It is not enough to assert: it is the brain/us that is doing it, therefore free will. How it is done is the issue.

Which comes down to: determinism makes it impossible for us to "cause and control our actions in the right kind of way."
 
Why? Well, as pointed out, because it does not take the unconscious neuronal nature of decision making into account.
I pointed out quite rightly that free will does not require consciousness.

Namely, if a decision is determined unconsciously and no alternate action is possible (determinism), that it's the deterministic incremental states of neural networks unfolding over time that determines the decision and related action, the decision was not freely chosen. Rather than freely chosen, it was determined.
Which does nothing to change the fact that it was their neural state, and nobody else's, that caused this thing to happen.

You are WRONG in your assumption that deliberation does not ever happen deliberately.

Your assertion makes such wild, and unstated, claims as "algorithms cannot have internal control of state."

Further, it would rely on a second wild and unstated claim that "neurons may not generally implement algorithms."

At no point is will free of necessitation.

Nobody ever said it was free of necessitation.

If certain things were not inevitable from particular antecedents, I would never have been able to get the dwarf's will to be "Throw Tantrum"; "cause trouble".

What was necessary was that dwarf having that will and being free to act on it.

Try to wave your hands begging for absolution all you want, but you will never get it.
 
P1: A freely chosen will is when someone chooses for themselves what they will do, while free of coercion and other forms of undue influence.
P2: A world is deterministic if every event is reliably caused by prior events.
P3: A freely chosen will is reliably caused by the person's own goals, reasons, or interests (with their prior causes).
P4: An unfree choice is reliably caused by coercion or undue influence (with their prior causes).
C: Therefore, the notion of a freely chosen will (and its opposite) is still meaningful within a fully deterministic world.

Namely, if a decision is determined unconsciously and no alternate action is possible (determinism), and it is the deterministic incremental states of neural networks unfolding over time that determine the decision and related action, then the decision was not freely chosen. Rather than freely chosen, it was determined.

To keep this brief and to the point, you are defining free will as a choice that is "free of causal necessity" (determinism). I am defining free will as a deterministic choice that is "free of coercion and undue influence".

We both agree that there is no such thing as freedom from causal necessity. All events unfold over time as the reliable result of prior events. Because we agree that "freedom from causal necessity" cannot exist, I am justified in questioning its use as the definition of free will.

"Free will" has another definition, one that is real, one that is meaningful and relevant, and one that everyone commonly understands. Free will is when we decide for ourselves what we will do, while free of coercion and undue influence. It is the notion of a voluntary or deliberate act. An action that we chose to do, rather than an action we were forced to do. And it is this notion that is actually used when assigning moral or legal responsibility for a person's behavior.

Because we agree that "freedom from causal necessity" is an irrational notion, one that is never used by anyone when assigning responsibility, it must be rejected. Anyone advocating for such a definition should be suspect of desiring to undermine the notions of moral and legal responsibility.

Therefore, rather than being a freely willed decision, it is a determined decision followed by a determined action (freely performed/necessarily performed). Consequently, it is false to label determined decisions and related actions as freely willed.

So, you are really in deep with the notion that "freely" must imply "freedom from causal necessity", something which you claim cannot possibly exist. Are we correct to assume that your intention is to undermine the notions of moral and legal responsibility?

Rather than [freely] willed, they are determined or necessitated.

If you use a rational definition of "freely", one that is limited to things that one might actually be free of, like, free from slavery, free from handcuffs, free from jail, free to speak my mind, free from coercion, free from mental illness, free from hypnosis, etc., then you can preserve the notion of freedom. But if you insist that "freely" must include "freedom from causal necessity" then all of those freedoms disappear, because none of them can claim to be free of reliable causation.

So, by choosing to require "freedom from causal necessity" in your notion of "freedom" one may also assume that your intention is to wipe out the notion of freedom itself. Is that your intent?

Inner necessitation is no more an instance of free will than external coercion -

So, it continues to appear that you do in fact intend to wipe out moral and legal responsibility from human understanding. Inner necessitation includes us choosing for ourselves what we will do. External coercion is a guy with a gun forcing us to subjugate our will to his. If you fail to make any distinction between these two events, then you have lost any moral grounding.

The distinction is that you either act according to your necessitated will (necessarily) or you are being forced against your necessitated will.

Yes. That's better (P3 and P4). All events, without exception, are causally necessitated by prior events. However, the event in which you chose for yourself what you would do is commonly known as a voluntary or deliberate act, one that you were free to choose for yourself, and the event in which the guy with the gun forced you to do his will is one in which you were not free to choose for yourself what you would do.

At no point is will free of necessitation.

AT NO POINT IS ANY EVENT EVER "free of causal necessitation" (also known as good old fashioned "reliable cause and effect").

As necessitation or determinism is the antithesis of freedom,

Then we should find the incompatibilists lobbying to remove "free" and "freedom" from all of our dictionaries, right? No event is ever free of reliable causation, because without reliable causation we would have no ability to do anything at all.

Thus, "freedom from causal necessity" is a paradoxical, self-contradictory, and false notion.

It is not enough to assert: it is the brain/us that is doing it, therefore free will. How it is done is the issue.

How is it done? The brain is doing it through a deterministic series of unconscious and conscious processes that perform many functions, including deciding for us, when presented with multiple possibilities (for example the restaurant menu), which option we will choose (for example, the steak or the salad). At least, that's what the neuroscientists are consistently telling us.

They are avoiding the term "free will", because it carries a lot of baggage, but if we clean it up, like I've done, it once again becomes a meaningful and relevant term. It is when we decide for ourselves what we will do, while free of coercion and undue influence. Nothing more. Nothing less.
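To make that concrete, here is a toy sketch (not a model of any actual brain; the menu, the scoring rules, and the calorie goal are all invented): a fully deterministic chooser that still "decides for itself", because the output is fixed by the agent's own internal state, its goals and memories, rather than by outside forcing.

```python
# A deterministic choice among multiple possibilities (the menu), driven
# entirely by the agent's own internal state (goals, recent meals).
# Same menu + same internal state -> same choice, every time.

def choose_dinner(menu, recent_meals, calorie_goal):
    def score(dish):
        name, calories = dish
        s = 0
        if calories <= calorie_goal:
            s += 2          # fits the agent's own goal
        if name in recent_meals:
            s -= 1          # the agent's own memory weighs in
        return s
    # ties broken deterministically by preferring fewer calories
    return max(menu, key=lambda d: (score(d), -d[1]))[0]

menu = [("Steak", 900), ("Chef Salad", 450)]
print(choose_dinner(menu, recent_meals={"Steak"}, calorie_goal=600))
# prints "Chef Salad"
```

Change the internal state (a different goal, different recent meals) and the same deterministic function yields a different choice, which is the sense in which the decision is caused by the person's own goals and reasons.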

Which comes down to: Determinism makes it impossible for us to “cause and control our actions in the right kind of way.''

The customer in the restaurant chooses to tell the waiter, "I will have the Chef Salad, please". The waiter brings him the dinner and the bill. This is sufficient proof that we can "cause and control our actions in the right kind of way". You've shown no examples, or any other evidence, that contradicts what we have seen with our own eyes.
 