The AntiChris
Senior Member
All of us on the compatibilist side have been arguing that free will is incompatible with causal necessity,
I think you meant 'compatible' here, not "incompatible".
DBT,
I’ve asked this before, but I don’t believe you addressed this.
Where did brains come from? What are they good for?
You believe that our impression that we can choose among competing alternatives is an illusion, and that a person really has no more choice than a rock rolling down a hill. Is that not right?
If so, you owe an explanation of where the alleged illusion of free choice comes from.
If your hard determinism is correct, I have argued that brains, minds, and the illusory impression of free choice would be useless. They would have no survival value, and hence there would be no selective pressure to evolve such organs and abilities.
But we have such organs. Why? Not all of evolution is driven by natural selection. Some results are pure accident (genetic drift) and some phenotypic outcomes are accidental consequences (spandrels) of other selection-driven outcomes. Clearly neither is the case for complex brains. Brains, minds, and the ability to make choices were cumulatively selected for by untold generations of descent with modification.
But why? In your world, brains do nothing for us. But in reality, complex brains are incredibly energy-intensive. They are expensive to make and maintain. Our big brains are what makes human birth perilous for the mother and why humans must be cared for by their parents for a very long time, much longer than most other species.
...It only bothers me because I have worked professionally in the field of artificial intelligence, and this kind of oversimplification is wrong on so many levels.
If you work in the field of artificial intelligence, you should know that free will is not a factor. That processing information and selecting an option according to sets of criteria has nothing to do with free will.
You really should be more cautious in making blanket statements about a field in which you have no expertise. The usefulness of free will in robotics has long been an open question, and it is a popular topic in AI. Here is a well-known 1999 paper by AI pioneer John McCarthy: FREE WILL-EVEN FOR ROBOTS
You could, but that begs the question of what we mean by "will". The dispute is over how to define "will" and "free will". That's what you need to focus on. That's why I keep telling you that your argument is irrelevant--because you seem to think that the physical substrate that mental processes depend on defines free will. It does not, and you do not mount any convincing argument that it does. That's what makes your entire argument a fallacy of irrelevance--specifically, a genetic fallacy.
One of those things is not like the others. The ball, the planets, and the moons, do not experience constraint. The dog experiences his chain as a constraint. For the dog, freedom is a meaningful concept, because the chain prevents him from chasing the squirrel, something that he really wants to do.
The common element is unimpeded action. Unimpeded action necessarily follows from necessitated will.
Again, you bury the meaningful distinction with a generalization. The notion of freedom requires the notion of constraint. The meaning of a specific freedom derives from the specific constraint.
For example:
1. We set the bird free (constraint: its cage).
2. We enjoy freedom of speech (constraint: censorship).
3. A woman was offering free samples in the grocery store (constraint: cost).
4. I participated in Libet's experiment of my own free will (constraint: undue influence).
Note that each of these freedoms has a meaningful constraint, specifically related to that type of freedom.
Therefore, to define free will as "freedom from causal necessity" is nonsense.
''Wanting to do X is fully determined by these prior causes. Now that the desire to do X is being felt, there are no other constraints that keep the person from doing what he wants, namely X. At this point, we should ascribe free will to all animals capable of experiencing desires (e.g., to eat, sleep, or mate). Yet, we don’t; and we tend not to judge non-human animals in moral terms.''
What we want to do may be determined by prior causes, but what we will do is determined by our own choice. Joachim Krueger may have a PhD, but he does not seem to understand the notions of responsibility or justice. In this quote, he links desire directly to action, without the mediation of rational judgment. This is a serious error.
Please be a bit more careful about whom you choose to quote. Or, be prepared to defend his words with your own.
Well, we actually do have some say in our condition. A person may choose to drop out of high school. That choice will change his future condition and thus impact other choices he makes down the road. We have each been active participants in all of the events that have affected us over the years. All of these choices, just like all other events, were causally necessary, of course. But this does not change the fact that we did in fact do the choosing. Nor does it prevent us from learning from our experience to make better decisions in the future.
I bolded the part that indicates you are not really understanding what choice is in this context.

Your reply made sense. It wasn't responsive, but it makes logical sense. That there are two possibilities is only one criterion for choice making. The other is that the chooser understands both options. You are going to be hard-pressed (you actually state the circuit does not know) to demonstrate that a circuit's construction is known to the circuit. As I see it, a bit comes through and the circuit operates. If it has a context zero there will be one result; if it has context "zero" there will be another result. It will do the same thing every time in the same context. Seems pretty deterministic to me. You still need to define choice operationally. "Unknown of the core" isn't an operational statement.

So the most basic form is the JNZ instruction.

So choice is behavior. Define it precisely in material terms. Here's the definition. Choice: an act of selecting or making a decision when faced with two or more possibilities. Your job is to supply the materiality, the operations. My sense is you'll have trouble with 'choice', 'decision', and 'faced'. Oh yeah, you'll probably have problems operationalizing behavior as well. This exercise request is legit since we are discussing determinism. The point I'm making is that self-reference and words not materially defined don't fit within determinism. You need to specify what is the material basis for a mind, for instance. Otherwise I'll just continue my freelance irritations to your non-operable anchored tech exercises.

How brains organize is immaterial to the discussion of simplified systems for the sake of discussion in less complicated terms.
I can and do just as easily make my observations about simpler systems than neural ones, which STILL display process.
I don't really care what religious beliefs you hold, or what justification you use to avoid observing that ANY state machine is capable of "choice", and that this choice is either constrained from making valid choices of particular classes or "free": I have free will to choose which of many things I will pick up off my desk (I choose to pick up nothing); I do not have free will to choose which of many things to pick up off of your desk (it is not here, and thus I cannot pick stuff up off of it, nor would you let me were I present).
That you can, outside my observations, determine based on a calculation whether and which thing I pick up does not in any way drive what I will pick up. Were you to attempt to calculate which I would pick up off the desk within my observations, well, that's when things get complicated (mostly due to my contrary nature and desire to be strategically unassailable).
What is certain is that my decision will be mine, regardless. It does not matter that I am constructed; perhaps in some way both are true, in that I exercise my free will in making decisions even while my will to determine what the actual process is remains in many ways constrained.
What is certain is that I CAN and DO discuss these things meaningfully. I cannot say the same for someone who thinks that they have no choice in the moment.
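The desk example above can be put in code: whether a "choice" is free depends on whether the preferred option is actually reachable. This is only a toy sketch; the function name and option sets are invented for illustration, not drawn from any real system.

```python
def choose(options, preference):
    """Pick the preferred option if it is reachable; otherwise report the constraint."""
    if preference in options:
        return preference      # free: the preferred action is available
    return None                # constrained: the preferred action is not available

my_desk = {"pen", "mug", "nothing"}
your_desk = set()  # your desk is not here, so nothing on it is reachable

assert choose(my_desk, "nothing") == "nothing"  # free to choose
assert choose(your_desk, "pen") is None         # constrained
```

The point mirrors the prose: the same deterministic selection procedure counts as free or constrained depending only on the option set it faces.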
Jump if not zero. One possibility is that the context, unknown of the core, contains zero, and the PC executes a jump. One possibility is that the context contains "zero" and the PC executes an increment.
These are both real possibilities for the architecture to encounter. One will happen, one will not and this choice will be made on the basis of the contents of a register.
We have observed that the rules of the system will allow a differential behavior on a singular element.
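The JNZ behavior described above can be sketched as a toy register machine in Python. The function name `jnz_step` and the particular numbers are illustrative assumptions, not any real instruction set; the point is only that the next program-counter value is fixed by the register's contents.

```python
def jnz_step(pc, register, jump_target):
    """Simulate a 'jump if not zero' step: the next program counter value
    depends entirely on the contents of the register."""
    if register != 0:
        return jump_target  # nonzero register: take the jump
    return pc + 1           # zero register: fall through to the next instruction

# Same context, same result, every time -- deterministic differential behavior:
assert jnz_step(pc=10, register=0, jump_target=42) == 11
assert jnz_step(pc=10, register=7, jump_target=42) == 42
```

Both outcomes are "real possibilities for the architecture to encounter", but in any given run the register's contents settle which one occurs.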
Evolutionary Psychology;
''In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.
Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind). The physical organization of the brain evolved because that physical organization brought about certain information-processing relationships -- ones that were adaptive.
It is important to realize that our circuits weren't designed to solve just any old kind of problem. They were designed to solve adaptive problems''
I’m saying that determinism doesn't allow alternate actions in any given instance in time. Compatibilists acknowledge this, which is why the compatibilist definition is ''to act in accordance (unimpeded) with one's will'' and not the ability to have done otherwise.
According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.
...
If you work in the field of artificial intelligence, you should know that free will is not a factor. That processing information and selecting an option according to sets of criteria has nothing to do with free will.
You really should be more cautious in making blanket statements about a field in which you have no expertise. The usefulness of free will in robotics has long been an open question, and it is a popular topic in AI. Here is a well-known 1999 paper by AI pioneer John McCarthy: FREE WILL-EVEN FOR ROBOTS
I can't access the page. Not that it matters. Unless there has been some miraculous breakthrough, AI has yet to achieve consciousness, let alone 'free will' - something that has been debated for centuries, with two sides to the argument, compatibilism and incompatibilism.
If the issue hasn't been resolved in humans....good luck with computers that possess neither consciousness nor will, only mechanical function.
Are you using the argument from authority? John McCarthy says this, therefore it is so?
...
You could, but that begs the question of what we mean by "will". The dispute is over how to define "will" and "free will". That's what you need to focus on. That's why I keep telling you that your argument is irrelevant--because you seem to think that the physical substrate that mental processes depend on defines free will. It does not, and you do not mount any convincing argument that it does. That's what makes your entire argument a fallacy of irrelevance--specifically, a genetic fallacy.
The dispute is about how to define will and whether 'free will' is compatible with determinism....and that is precisely what I have been focusing on all along, in case it has somehow slipped your mind.
As a reminder, incompatibilism argues that the compatibilist definition of free will is flawed for the given reasons.
For example.
''Wanting to do X is fully determined by these prior causes. Now that the desire to do X is being felt, there are no other constraints that keep the person from doing what he wants, namely X. At this point, we should ascribe free will to all animals capable of experiencing desires (e.g., to eat, sleep, or mate). Yet, we don’t; and we tend not to judge non-human animals in moral terms.'' - cold comfort in compatibilism.
''An action’s production by a deterministic process, even when the agent satisfies the conditions on moral responsibility specified by compatibilists, presents no less of a challenge to basic-desert responsibility than does deterministic manipulation by other agents.''
freedom
1: the quality or state of being free: such as
a: the absence of necessity, coercion, or constraint in choice or action - Merriam-Webster
The bit either leads to the next event or it doesn't. Therefore, correcting myself, information comes through the logic system. Either zero or 'zero' is compared, depending on which exists in the logic reference library.
Nowhere do I demand this for choice. You have shoehorned "understanding" in as if that is necessary to choice. It is not.
Understanding is a requirement for "intelligence" or "intelligent choice", but not for choice in general.
I did in fact make a typo (it is zero or "not zero"), but you are not the sort to give charity to opposing viewpoints for the sake of understanding, else we would not be here.
As it stands, the bit does not "come through"; it is "looked at". An event happens, and as a part of that event something changes, so the circuit looks at more information before doing a thing.
That something does the same thing in the same context makes it deterministic. That something does different things in different contexts means that those contexts generate differential choice within the system. That the system's choice is blind or reasoned doesn't matter to the fact that a choosing operation happened.
These choices can be massive or complex. What is important is the consistency of the state machine that generates and displays choice behavior.
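A minimal sketch of that claim: a state machine whose transition function responds consistently to the same context and differentially to different contexts. The transition table below is invented purely for illustration.

```python
# Transition table: (state, input context) -> next state.
# The states and inputs are hypothetical; only the consistency matters.
TRANSITIONS = {
    ("idle", "signal"): "active",
    ("idle", "noise"): "idle",
    ("active", "signal"): "active",
    ("active", "noise"): "idle",
}

def step(state, context):
    """Same state and context always yield the same next state (deterministic);
    different contexts yield different behavior (differential 'choice')."""
    return TRANSITIONS[(state, context)]

assert step("idle", "signal") == "active"              # one context, one result
assert step("idle", "noise") == "idle"                 # different context, different result
assert step("idle", "signal") == step("idle", "signal")  # consistent every time
```

Whether one wants to call that a "choosing operation" is exactly the dispute in the thread; the code only shows that deterministic consistency and context-dependent differential behavior can coexist in the same trivial machine.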
The argument is that the compatibilist definition of free will is not sufficient to prove the proposition.
Distinctions do matter. Of course they do.
Setting the bird free of its cage doesn't establish the bird's freedom of will.
Freedom of speech, etc, doesn't establish freedom of will for the speaker.
The ball bounces freely down the hillside.
The bird dives and swoops freely through the air.... These are all actions that follow action production.
It is the nature of action production that is specific to the issue of freedom of will, because it is specifically the means of action production that determines what action is taken in a given instance in time.
The use of free in relation to action says nothing about the means, state or status of the activator of actions.
I argue that the term free will is redundant. The term 'free will' tells us nothing about human behaviour, means or drivers. We have will, but it's not free will.
It seems to me that the term 'free will' has become somewhat of an ideology, an aspiration.
To me, it just doesn't apply. Acting according to one's will is inevitable. We are evolved to act, and unless something prevents us from acting, we necessarily act according to our will.
Our choices are determined by mechanisms and processes not of our choosing, they are necessitated choices.
Freedom is defined as 'freedom from necessity.'
We don't choose our condition, yet our condition forms our being, our mind, character, thoughts and actions.
Please note the portion I highlighted. There is no either/or between the brain and the mind. It is the same system, whether we are speaking of mental operations, like reasoning, evaluating, and choosing, or brain neural functions.
Our inherent condition began long, long before we decide to drop out of high school. We don't get to choose our parents, genetic makeup, nation, state, society, culture, social conditions, economic status, or physical or mental capacities, all of which make us what we are, how we think and, in relation to our immediate circumstances, what we think.
That, after all, is the nature of determinism.
Then you are arguing nonsense. A decision here is just an event that goes one of "two or more possible ways", every time the same way in the same context, actually finding resolution.

Choice: an act of selecting or making a decision when faced with two or more possibilities.
First I would argue the logic system isn't faced with a decision.
I was actually thinking this through last night, insofar as coming to the realization that free will depends on constraint to operate: I was chasing my brain through circumstances wherein one would face a decision without ever facing being subordinated in will to limitations of action.

The question is not whether the bird has free will or not. The question is what "freedom" means.
The bird's cage is a meaningful and relevant constraint upon the bird's freedom to fly away.
To have any meaning at all, a "freedom" must reference, either explicitly or implicitly, some meaningful and relevant constraint. A meaningful constraint prevents us from doing something that we want to do. A relevant constraint is something that we can actually be "free from" or "free of".
For example:
1. We set the bird free (from its cage).
2. We enjoy freedom of speech (free from political censorship).
3. We were offered free samples (free of charge).
4. We participated in Libet's experiment of our own free will (free of coercion and undue influence)
A stream has no interests in where it flows. A guy in a kayak actually cares about whether he goes over a waterfall or not. Inanimate objects literally have no skin in the game, but the guy in the kayak does.
It is the rock in the stream that makes the atom of water break left or right, that forces decision on the basis of what shape the rock has, how it divides the stream.
Why is interest necessary? It shall be as it is by its nature and by the nature of its constraints. We are not even talking about a stream, but a rock and a single atom within it. On the scale of the stream itself, things are mostly static. Again, there is no meaningful constraint around which to break on that order.
Free will, like any other freedom, is the absence of any meaningful and relevant constraints that prevent the person from choosing for themselves what they will do.
Causal necessity is not a meaningful or a relevant constraint. It is not a meaningful constraint because it does not prevent us from doing what we want to do (it is the source of our want). And it is not something that we could be free of even if we wanted to, so there is no reason to ever bring it up. It makes itself irrelevant by its own ubiquity.
You're doing what DBT was doing, burying the distinction within the generality, and losing significant meaning. The consequences of the kayak going over the dam are pretty dire for the guy in the kayak. The kayak, the dam, the water, and the atoms in the rocks in the water, on the other hand, couldn't care less.
I do see free will as existing on the scale of person, but I do not find the restriction of free will to things as grand only as persons a meaningful distinction.
The transistor serves in its truth just as easily.
As always, the system, in its consistent response to consistent context and differential response to different context, creates "decision matrixes" on the system.
The existence of this decision matrix through time is the really important bit, but it only pops into reality when the constraint pushes a change of context.
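One way to picture such a "decision matrix" is as a mapping from contexts to outcomes that only shows differential behavior when the context actually changes. This is a hedged illustration with invented names, not a model of any real circuit.

```python
# A 'decision matrix' as a context -> outcome mapping (contents are hypothetical).
decision_matrix = {"zero": "increment", "nonzero": "jump"}

def decide(context):
    """The matrix exists through time; each lookup is fully determined by context."""
    return decision_matrix[context]

# Differential behavior only appears when the context changes:
trace = [decide(c) for c in ["zero", "zero", "nonzero"]]
assert trace == ["increment", "increment", "jump"]
```

While the context stays the same, the system repeats itself; the change of context is what makes the latent matrix visible as a different outcome.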
I think Jarhyn is closer to nailing the issue. It takes both opportunity and constraint to bound the possibility of choice. He only misses it with his circuit in that the bit lacks both options being available at the critical juncture. He, being the circuit god, constrains the availability of options by asserting a singular fixed context. Now, his example as he presents it is what I think most people believe constitutes a choice. As I've shown, his model is clearly not a choice, since it doesn't include constraint and opportunity. In fact, deterministic behavior is always limited by opportunity.
If the water flow is free to control where the kayak goes, then the kayaker dies. If the kayaker controls where it goes, then the kayaker lives. So, the kayaker is the only object that has an interest in the outcomes of this event. And that interest in the consequences is producing considerable action. So, the interest serves as a motivational cause of action.