Copernicus
Industrial Grade Linguist
...In fact, it comes up a lot at conferences, because the overarching goal of AI is to replicate intelligent behavior in machines. It is of particular interest in the field of robotics, because robots have all the same problems that humans do in navigating in uncertain environments. They have to make the same kind of choices, and we model their behavior on human and animal behavior.
Intelligent behaviour in mechanical systems is not willed behaviour. There is no 'will' involved, just function. Function that is determined by circuitry and software.
You seem to think that human bodies are not mechanical systems for some reason. This is just doubling down on a genetic fallacy. Because robots are not "fleshy machines", you believe that bodies made of different materials cannot be made to perform the same functions. At least, that appears to be the unwarranted conclusion you are jumping to.
To conflate intelligence with will is a category error. They are two different things. An animal may not be considered intelligent, yet have both will and the ability to act in accordance with its will.
It would also be a mistake to conflate plain will with free will. We have will, but it is not free will.
Both the will of the animal and the actions that follow are necessitated by antecedents beyond the control of the animal.
Nobody has conflated intelligence with will, so that is a straw man. Obviously, we want people to make intelligent decisions, but they have been known to make stupid ones. Animals have brains and obviously possess varying degrees of intelligence. The only reason they've been inserted in this discussion is because they don't have the same sense of morality that humans do, and moral responsibility is an issue that we associate with free will. However, in a debate over causal necessity where a concept like "free will" is on the chopping block, I don't see how moral responsibility is going to escape the same doom. I consider the moral responsibility issue tangential, because morality only concerns human interactions, and even humans exempt each other from responsibility for their actions under many different circumstances. Animals are usually not held responsible for their actions by humans unless they can be trained to behave the way we want them to.
I think you believe that you have, but you don't show much evidence of understanding what definitions do or how they work.
I know exactly what definitions are. Just as I know exactly why compatibilists, given the nature of determinism and the nature of brain function, decision making, action initiation, etc., must define free will in the way they do.
Sorry, but you really don't seem to understand the descriptive nature of definitions, no matter how much you protest otherwise. You won't accept ordinary English usage in the definition of "free will" and insist on prescribing your own definition that seeks to make causal necessity a part of the definition. That begs the question of whether causal necessity ought to be part of the definition. That's why compatibilists consider hard determinists to be engaging in a fallacy of definition wrt "free will". That's what the debate is about, so it can't be made a premise in your argument.
...
We've discussed Pereboom's Manipulation Argument in the past, and it has more to do with problems inherent in assigning moral responsibility than with actual free will. We judge the behavior of others because we are all expected to adhere to a moral code. However, that has more to do with moral philosophy than with what it means to choose from a set of alternative acts of will. What does it mean to be responsible for one's actions? His article was very influential among philosophers, but it attracted as much criticism as praise. Although moral responsibility is often associated with free will, it doesn't actually define it. People may not always be held accountable for their actions, just as we don't hold animals accountable for theirs. Lacking a proper sense of moral responsibility does not mean that one lacks free will.
Moral responsibility is related to free will. As is the nature of cognition, decision making and action initiation.
Moral responsibility is related to free will, but free will does not entail moral responsibility. The connection is only about the role of free will in assigning moral responsibility, and there are many instances of free will that have nothing to do with morality. For example, animals and infants are responsible for the decisions they make, but we don't hold them accountable the way we do adult humans. We teach children to be morally responsible in exercising free will, but we don't judge their actions as if they were already adults. There's a learning curve involved, and they don't suddenly acquire free will when they achieve adulthood.
Another way of putting it being:
Abstract
If one’s solution to the free will problem is in terms of real causal powers of agents then one ought to be an incompatibilist. Some premises are contentious but the following new argument for incompatibilism is advanced:
1. If causal determinism is true, all events are necessitated
2. If all events are necessitated, then there are no powers
3. Free will consists in the exercise of an agent’s powers
Therefore, if causal determinism is true, there is no free will; which is to say that free will is incompatible with determinism, so compatibilism is false.
Others have already dealt with this. From my perspective, it lacks a definition of what the word "powers" means, so it requires reading the paper that this is an abstract for in order to really discuss its merits intelligently.