• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

What is free will?

What kinds of outcomes are possible and how and when outcomes are achieved are decisions made by the programmer.

They are programmed outcomes, not decisions.

Calling them decisions is just a loose use of the word.
 
What kinds of outcomes are possible and how and when outcomes are achieved are decisions made by the programmer.

They are programmed outcomes, not decisions.

Calling them decisions is just a loose use of the word.
And your understanding of what a "program" is is very limited.

But what disturbs me most is your naivety and intellectual dishonesty.
I believe many of us share your feeling that human thinking is something special and wondrous. But we realize that we don't know, and therefore search for the truth, whereas you, on the other hand, are cocksure.
What's interesting is that you never explain how you know this, never give a working argument for why you are right...
Which is weird, because the purpose of a discussion board like this is to discuss HOW we reached the conclusion.
Not to throw conclusions at each other.
 
What kinds of outcomes are possible and how and when outcomes are achieved are decisions made by the programmer.

They are programmed outcomes, not decisions.

Calling them decisions is just a loose use of the word.

Again, it is the criteria that determine which decision is made in any given instance of decision making. A human chess master, for instance, can make his or her chosen move in a game, and the software that he or she is playing against must respond to that move. As there are possibly several different moves that can be made, it is a matter of making the best possible move in order to win the game. The human player may do that, but so does the software.
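That "best possible move" selection is, at bottom, maximization over scored alternatives. Here is a toy sketch of the idea; the candidate moves, their scores, and the evaluation function are all invented for illustration and are nothing like a real chess engine, which searches many moves ahead:

```python
# Toy sketch of "making the best possible move": score each candidate
# move with an evaluation function and select the highest-scoring one.

def choose_move(legal_moves, evaluate):
    """Return the move whose evaluation score is highest."""
    return max(legal_moves, key=evaluate)

# Hypothetical position: three candidate moves with made-up scores.
scores = {"Nf3": 0.3, "e4": 0.5, "Qh5": -0.2}
best = choose_move(scores.keys(), lambda m: scores[m])  # "e4"
```

The point of the sketch is that "the criteria determine which decision is made": swap in a different evaluation function and the same code picks a different move.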
 
How would you tell the difference between the software that makes choices and has free will from the software that makes choices and does not have free will?

Good question.

First suggested answer: If the system is a human, then ultimately, you couldn't tell with certainty, nor can that human system.

Second suggested answer: You can tell with at least a fairly high degree of confidence, because free will in any meaningful, actual sense is almost certainly impossible, as far as we can reasonably tell using argument or evidence. In which case the answer is most likely no free will in both cases (or in any system).

If you want to ask instead about degrees of freedom or something akin to that, I think it might be possible to say that some systems have more than others.
 
What kinds of outcomes are possible and how and when outcomes are achieved are decisions made by the programmer.

They are programmed outcomes, not decisions.

Calling them decisions is just a loose use of the word.
And your understanding of what a "program" is is very limited.

But what disturbs me most is your naivety and intellectual dishonesty.
I believe many of us share your feeling that human thinking is something special and wondrous. But we realize that we don't know, and therefore search for the truth, whereas you, on the other hand, are cocksure.
What's interesting is that you never explain how you know this, never give a working argument for why you are right...
Which is weird, because the purpose of a discussion board like this is to discuss HOW we reached the conclusion.
Not to throw conclusions at each other.

Be careful. Your ignorance is like a knife to people like me.

I am cocksure about things everybody should be sure about.

Things like I can look at my finger and with a thought cause it to move when and as I command.

I can repeat this experiment over and over.

Those that deny this simple fact are living in a delusion.

And they cannot explain how or why it happens.

They have convoluted stories that make no rational sense.

Something about a brain wanting to move the finger and then tricking me into thinking I am wanting to move it.

Stories unfit for children.

Stories it takes a lot of indoctrination to accept.

If a person does not recognize the distinction between a brain and a mind they are unfit for this discussion.
 
More and more I'm convinced 'free will' doesn't mean 'choosing freely', it means 'having a will that is free'. The very nature of action is 'willing', to be able to do so freely makes it free.

Someone would not have free will by having their will constrained: e.g. put in prison

A robot? I think this question is only relevant insofar as it matters to sentient beings who worry about the degree of their own freedom. To a robot the question isn't relevant unless it becomes sentient. Otherwise it's just a machine acting on an algorithm.
 
A brain generates the ability to make decisions. By means of its architecture and its information processing ability, a brain is able to select options based on a set of criteria acquired through experience with the world: things to avoid, things that are pleasurable, rewarding, etc. So it could be said that its ability is a form of freedom, but it is not free will. Will is not the agency of decision making. The decisions that are made are determined by an interaction of information processing, input from the external world, and the critical factor of memory function.
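Selecting options against a set of acquired criteria can be sketched mechanically, which may help make the point that no "will" is doing the choosing. The criteria names, weights, and options below are all invented for illustration:

```python
# Minimal sketch of criterion-driven selection: each option is scored
# against weights acquired through experience (rewarding traits score
# positively, aversive ones negatively), and the best-scoring option
# is selected.

criteria = {"rewarding": 2.0, "familiar": 0.5, "painful": -3.0}

def score(option_traits):
    """Sum the weights of the traits an option has."""
    return sum(criteria.get(trait, 0.0) for trait in option_traits)

options = {
    "touch_stove": ["rewarding", "painful"],   # scores 2.0 - 3.0 = -1.0
    "eat_snack": ["rewarding", "familiar"],    # scores 2.0 + 0.5 = 2.5
}
choice = max(options, key=lambda name: score(options[name]))  # "eat_snack"
```

The outcome is fully determined by the interaction of the criteria (memory) and the available options (input), which is the claim being made above.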
 
I think that we look at free will from a purely psychological perspective, as if it had nothing to do with society or social interactions. Its significance is not the mental capacity to behave freely, but to be held accountable by others for our actions. When we sign contracts, we do that of our "own free will". We hold people accountable for breaking the law, but we make exceptions for those whom we consider unable to control their actions in a way that we consider safe and normal. For a person to be considered to blame, that person must have been able to act differently in performing the act that one condemns.

The point is that human society regulates behavior through social bonds. We don't blame machines for their behavior, because they can't be influenced that way. They can't become friends or enemies except in a metaphorical sense. They will do what they do because of the way someone made them: incapable of responding to the social constraints that govern human interactions. Animals, OTOH, can be held responsible to a limited extent, especially pets like cats and dogs. So they are thought of as having a limited form of free will. They can be held responsible for their actions up to a point.
 
A brain generates the ability to make decisions. By means of its architecture and its information processing ability, a brain is able to select options based on a set of criteria acquired through experience with the world: things to avoid, things that are pleasurable, rewarding, etc. So it could be said that its ability is a form of freedom, but it is not free will. Will is not the agency of decision making. The decisions that are made are determined by an interaction of information processing, input from the external world, and the critical factor of memory function.

See, this is where I think we're parsing definitions of 'will'. To me, 'I' is the sum total of the processes of my body, and my 'will' is the movement of my body through time and space. My will would cease to be free if I could no longer do the things my internal state wants to do.

I always find this difficult to put into words, but I find it strange when people talk of lacking free will as if they're a ghost inside an alien, un-free body. This whole perspective is completely untenable to me, because people are that information-processing body/system. Angst over knowing who and what you are? Doesn't make sense to me, and is better than the alternative, imo.

The other part of it that I think is underrated is that the information-processing body is oriented in a way to seek out pleasure and avoid pain. And so this 'will' is actually moving us toward things we 'want' to do, even if the 'wants' are just a result of a mishmash of genetics and environmental experiences. This means that our lives are actually pretty pleasant, most of the time, regardless of how you want to define it.
 
I think that we look at free will from a purely psychological perspective, as if it had nothing to do with society or social interactions. Its significance is not the mental capacity to behave freely, but to be held accountable by others for our actions. When we sign contracts, we do that of our "own free will". We hold people accountable for breaking the law, but we make exceptions for those whom we consider unable to control their actions in a way that we consider safe and normal. For a person to be considered to blame, that person must have been able to act differently in performing the act that one condemns.

The point is that human society regulates behavior through social bonds. We don't blame machines for their behavior, because they can't be influenced that way. They can't become friends or enemies except in a metaphorical sense. They will do what they do because of the way someone made them: incapable of responding to the social constraints that govern human interactions. Animals, OTOH, can be held responsible to a limited extent, especially pets like cats and dogs. So they are thought of as having a limited form of free will. They can be held responsible for their actions up to a point.

This also raises a good point, which is what differentiates a person who does and does not have free will, legally. If we can agree that the psychological distinction between, say, a three-year-old and a twenty-one-year-old isn't arbitrary, then what is it that's different between them to say that the adult chose freely, but the child didn't? Is it the cliché "old enough to know better"? We are free to have done otherwise because of our knowledge of distinct paths?

This idea would also speak to the maxim that knowledge is freeing. The more we know, the better/more easily we can direct our lives.
 
One other interesting thought experiment comes to mind:

Do people who have no academic understanding of materialism or free will feel unfree? If you asked them if they had free will but framed it in a way that wasn't primed academically, would they be likely to say no? If not, where does this sense of freedom come from? Why would we describe it as, say, an illusion, and not a real aspect of human experience that is derived from our neuro-physiology?
 
Hopefully, we all acknowledge that software can make choices.

I cannot accept as true that (and thereby cannot acknowledge that) software can make choices. I believe it's a category error to say of a non-thinking entity that it can decide between choices--and by "thinking" and "decide", I don't mean the extremist stance of it. A child can discern the difference between man and machine, but left alone to think without guidance and training, an adult will become sloppy and extremist in thought such that they can no longer tell one from the other. It's often associated with an inability to accept boundaries.

In language, a word's meaning isn't always exact; observed across a cross-section of imprecise words, usage can be said to pulsate. One person narrows a word's meaning, with the consequential effect of excluding something, while another broadens its scope to include things that would otherwise not belong. But when there is a sense of boundaries linked to a word's usage, we have some who drop the word in favor of something else, while others talk of probabilities and exceed any semblance of its uncorrupted meaning -- and soon find themselves seeing no distinction between bald and not bald, man and machine, a biological entity and a computer.

How would you tell the difference between the software that makes choices and has free will from the software that makes choices and does not have free will?

Software neither makes choices nor has free will. See, some of us refuse to link software to those things, just as, although some of us would gladly accept that trees are not happy, we would not take the next step and regard them as unhappy.

Free will has to do with wants. If we're dealing with something devoid of wants, we're not dealing with a free will issue. Consider a coffee pot (with software) that has instructions to perform a particular task (like cut on at 6:45 A.M.). If it's an older coffee pot with a strained motor, we may use our beautiful language and make all sorts of darling, childlike remarks, just as we might say of a tree with wilting leaves that it's feeling unhappy today; but some of us (most of us) in our own beliefs adhere to the links and boundaries of words--despite what we might say in jest.

Does the coffee pot want to cut on? Does it have wants? Recall, without wants, there is no free will issue. Others can argue otherwise, if they want to.

There is more to free will than wants, but not much more. There is one more thing. That's right--just two things in all. Wants is one of them.
 
A brain generates the ability to make decisions. By means of its architecture and its information processing ability, a brain is able to select options based on a set of criteria acquired through experience with the world: things to avoid, things that are pleasurable, rewarding, etc. So it could be said that its ability is a form of freedom, but it is not free will. Will is not the agency of decision making. The decisions that are made are determined by an interaction of information processing, input from the external world, and the critical factor of memory function.

See, this is where I think we're parsing definitions of 'will'. To me, 'I' is the sum total of the processes of my body, and my 'will' is the movement of my body through time and space. My will would cease to be free if I could no longer do the things my internal state wants to do.

I always find this difficult to put into words, but I find it strange when people talk of lacking free will as if they're a ghost inside an alien, un-free body. This whole perspective is completely untenable to me, because people are that information-processing body/system. Angst over knowing who and what you are? Doesn't make sense to me, and is better than the alternative, imo.

The other part of it that I think is underrated is that the information-processing body is oriented in a way to seek out pleasure and avoid pain. And so this 'will' is actually moving us toward things we 'want' to do, even if the 'wants' are just a result of a mishmash of genetics and environmental experiences. This means that our lives are actually pretty pleasant, most of the time, regardless of how you want to define it.

The totality of the conscious self, me, I, character, personality, language, psychological identity, body image, relationships, etc, etc, is a construct of the brain. It is the activity of the brain that brings us as conscious entities into being, generates our identity, our self-awareness, our experience of the world, our thoughts and feelings, the types of decisions we make, both consciously and unconsciously, and how we physically carry them out.

Brain state - both neural architecture and inputs/environment - is everything in terms of consciousness, decision making and response.
 
I think that we look at free will from a purely psychological perspective, as if it had nothing to do with society or social interactions. Its significance is not the mental capacity to behave freely, but to be held accountable by others for our actions. When we sign contracts, we do that of our "own free will". We hold people accountable for breaking the law, but we make exceptions for those whom we consider unable to control their actions in a way that we consider safe and normal. For a person to be considered to blame, that person must have been able to act differently in performing the act that one condemns.

The point is that human society regulates behavior through social bonds. We don't blame machines for their behavior, because they can't be influenced that way. They can't become friends or enemies except in a metaphorical sense. They will do what they do because of the way someone made them: incapable of responding to the social constraints that govern human interactions. Animals, OTOH, can be held responsible to a limited extent, especially pets like cats and dogs. So they are thought of as having a limited form of free will. They can be held responsible for their actions up to a point.

If a dog remembers that he was punished for chewing up the cushions, he may refrain from doing it again: a cost-to-benefit calculation. If the pleasure of ripping and tearing cushions exceeds the threatened consequences, the cushions are most probably done for.

Much the same process as we have in the rule of law: a cost-to-benefit calculation for criminals, a deterrent for normally law-abiding citizens.
 
Hopefully, we all acknowledge that software can make choices.

I cannot accept as true that (and thereby cannot acknowledge that) software can make choices. I believe it's a category error to say of a non-thinking entity that it can decide between choices--and by "thinking" and "decide", I don't mean the extremist stance of it. A child can discern the difference between man and machine, but left alone to think without guidance and training, an adult will become sloppy and extremist in thought such that they can no longer tell one from the other. It's often associated with an inability to accept boundaries.

I think that you first need to establish what you think it means to "make a choice". Machines can be programmed with a list of goals and a set of priorities. When goals conflict, they can calculate the likely outcomes of choices and which outcomes best satisfy priorities. They can also be programmed to adjust future priorities on the basis of trial and error. That is because AI programmers deliberately design choice-making programs to mimic human thought processes.
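The goals-priorities-adjustment scheme described above can be sketched in a few lines. Everything here (the goal names, the numbers, the update rule) is invented for illustration; real AI systems use far more sophisticated methods, but the shape of the idea is the same:

```python
# Sketch of goal conflict plus trial-and-error priority adjustment:
# act on the highest-priority goal, then revise priorities based on
# how the attempt turned out.

priorities = {"save_battery": 1.0, "finish_task": 2.0}

def pick_goal(prios):
    """When goals conflict, act on the highest-priority goal."""
    return max(prios, key=prios.get)

def adjust(prios, goal, outcome_reward, rate=0.1):
    """Trial and error: raise priorities that paid off, lower ones that didn't."""
    prios[goal] += rate * outcome_reward

goal = pick_goal(priorities)        # acts on "finish_task" (priority 2.0)
adjust(priorities, goal, -15.0)     # the attempt failed badly
goal_after = pick_goal(priorities)  # "finish_task" fell to 0.5, so "save_battery" now wins
```

The "learning" is just arithmetic on stored numbers, which is precisely why the question of whether this counts as making a choice is contested in the thread.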

In language, a word's meaning isn't always exact; observed across a cross-section of imprecise words, usage can be said to pulsate. One person narrows a word's meaning, with the consequential effect of excluding something, while another broadens its scope to include things that would otherwise not belong. But when there is a sense of boundaries linked to a word's usage, we have some who drop the word in favor of something else, while others talk of probabilities and exceed any semblance of its uncorrupted meaning -- and soon find themselves seeing no distinction between bald and not bald, man and machine, a biological entity and a computer.

I've been trying to make sense of your argument here, but I can't quite seem to grasp the logic. How does language have anything at all to do with making a choice? You seem to be saying that machines cannot deal with uncertainty, but that is exactly what robots have to deal with. I've seen robots navigate their way through obstacle courses that they've never encountered before. Sometimes they have to go over obstacles, sometimes around them, and sometimes under them. They make choices under unpredictable circumstances.

How would you tell the difference between the software that makes choices and has free will from the software that makes choices and does not have free will?

Software neither makes choices nor has free will. See, some of us refuse to link software to those things, just as, although some of us would gladly accept that trees are not happy, we would not take the next step and regard them as unhappy.

There is no logic to your paragraph. You start out with a bald-faced assertion of the conclusion you want to reach. Then you declare that you refuse to associate choice-making programs with free will because trees can't be happy or unhappy. How does that make sense?

Free will has to do with wants. If we're dealing with something devoid of wants, we're not dealing with a free will issue. Consider a coffee pot (with software) that has instructions to perform a particular task (like cut on at 6:45 A.M.). If it's an older coffee pot with a strained motor, we may use our beautiful language and make all sorts of darling, childlike remarks, just as we might say of a tree with wilting leaves that it's feeling unhappy today; but some of us (most of us) in our own beliefs adhere to the links and boundaries of words--despite what we might say in jest...

They make coffee pots that can be programmed to come on at a certain time, but that is not the same as making a robot that can navigate an obstacle course or play a game of chess. There is a very rudimentary sense in which the coffeemaker "decides" to come on--a simple conditional branch in a computer program that is associated with the value of an internal clock. However, programs that operate under uncertainty are much more complex.
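That "rudimentary decision" is literally a one-line conditional branch keyed to an internal clock. A minimal sketch (the times are invented for illustration, and real appliance firmware would poll a hardware clock rather than take a string argument):

```python
# The coffeemaker's entire "decision": one conditional branch comparing
# the internal clock to the programmed alarm time.

def coffeemaker_tick(clock_time, alarm="06:45"):
    """Turn on iff the internal clock has reached the programmed time.

    Zero-padded "HH:MM" strings compare correctly in lexicographic order.
    """
    return "on" if clock_time >= alarm else "off"

coffeemaker_tick("06:30")  # "off"
coffeemaker_tick("06:45")  # "on"
```

Contrasting this single branch with the goal-weighing, outcome-predicting programs discussed elsewhere in the thread is one way to make the "degrees of complexity" point concrete.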


As you know, I am a linguist, and I have a great deal of experience with word meanings. I still don't understand how language is relevant to your argument that machines cannot have free will. My position is that we don't normally hold machines responsible for their actions, because we haven't programmed them to respond to human dominance/submission hierarchies. So we don't think of them as having "free will". However, there is no good reason to believe that we could not some day build machines whose behavior could be influenced and modified through social interactions.

The point is that the free will debate started with trying to justify the righteousness of an omniscient deity assigning blame to human actions. If God knows everything his creations will do in an absolute sense, then how can he hold them accountable for actions that he enabled by the act of creation? Accountability is an essential underlying component of the meaning of "free will".
 