I think that you first need to establish what you think it means to "make a choice".
Choosing may be described as an operation that (1) inputs two or more options, (2) applies some appropriate criteria for comparative evaluation, and based on that evaluation, (3) outputs a single choice, usually in the form of "I will X", where X is the thing we will do.
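The three-step operation described above can be sketched in code. This is a minimal illustration, not anyone's actual implementation; the options and the scoring criterion are invented for the example.

```python
# Sketch of choosing as an operation: (1) input two or more options,
# (2) apply an evaluative criterion, (3) output a single "I will X".

def choose(options, evaluate):
    """Return a choice of the form 'I will X' from two or more options."""
    if len(options) < 2:
        raise ValueError("choosing requires two or more options")
    best = max(options, key=evaluate)   # comparative evaluation
    return f"I will {best}"

# Example with an assumed, arbitrary scoring of lunch options:
scores = {"eat soup": 2, "eat salad": 3, "eat pizza": 5}
print(choose(list(scores), scores.get))  # -> I will eat pizza
```

Whatever is doing the evaluating, the form of the operation is the same: options in, one committed course of action out.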
Machines can be programmed with a list of goals and a set of priorities. When goals conflict, they can calculate the likely outcomes of each choice and determine which outcome best satisfies those priorities. They can also be programmed to adjust future priorities on the basis of trial and error. This is no accident: AI programmers deliberately design choice-making programs to mimic human thought processes.
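The goal-and-priority scheme described above can be made concrete with a small sketch. Everything here (the goals, the predicted satisfaction values, the update rule) is a hypothetical illustration of the idea, not a real robot's control code.

```python
# Hypothetical sketch: conflicting goals resolved by weighing predicted
# outcomes against priorities, with trial-and-error priority adjustment.

priorities = {"stay_charged": 1.0, "finish_task": 0.8}

# Predicted degree (0..1) to which each action satisfies each goal.
predictions = {
    "go_to_charger": {"stay_charged": 0.9, "finish_task": 0.1},
    "keep_working":  {"stay_charged": 0.2, "finish_task": 0.9},
}

def pick_action():
    # Choose the action whose predicted outcomes best satisfy the priorities.
    return max(predictions, key=lambda a: sum(
        priorities[g] * p for g, p in predictions[a].items()))

def adjust(goal, observed_reward, rate=0.1):
    # Trial and error: nudge a goal's priority toward the observed reward.
    priorities[goal] += rate * (observed_reward - priorities[goal])

action = pick_action()   # weighs both goals, returns one action
adjust("finish_task", observed_reward=1.0)  # learn from how it went
```

Note that the machine here "resolves a conflict" and "learns", yet every step is an arithmetic consequence of numbers its designers chose.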
Machines are tools that we create to help us accomplish our own will. They have no will of their own. They literally "have no skin in the game". If a machine begins acting as if it had a will of its own, we'd call a repairman.
I've been trying to make sense of your argument here, but I can't quite grasp the logic. How does language have anything at all to do with making a choice? You seem to be saying that machines cannot deal with uncertainty, but that is exactly what robots have to deal with. I've seen robots navigate obstacle courses they've never encountered before. Sometimes they have to go over obstacles, sometimes around them, and sometimes under them. They make choices under unpredictable circumstances.
Machines have no interest in the outcomes. Only biological organisms have an interest in the outcomes, because they can suffer, and they can die. So evolution has provided organisms with biological drives that enhance the likelihood that they will survive and reproduce. Take hunger, for example. There were no doubt variants among our evolutionary ancestors that lacked a sense of hunger, and they quickly went extinct.
As you know, I am a linguist, and I have a great deal of experience with word meanings. I still don't understand how language is relevant to your argument that machines cannot have free will.
Machines lack a will of their own, and without a will of one's own, the notion of free will simply does not apply.
The point is that the free will debate started with trying to justify the righteousness of an omniscient deity assigning blame to human actions. If God knows everything his creations will do in an absolute sense, then how can he hold them accountable for actions that he enabled by the act of creation? Accountability is an essential underlying component of the meaning of "free will".
No no no. Free will is a secular issue that is tied to assessing responsibility. If you did something bad because you chose to, then you are subject to correction. If you did something bad because someone forced you against your will, then you are innocent and the person who coerced you is held responsible. If you did something bad because of a significant mental illness, such as one that subjected you to hallucinations and delusions, or to an irresistible impulse, or that simply impaired your ability to reason, then you are innocent and the mental illness is held responsible and is subject to correction by medical or psychiatric treatment.
So, the notion of free will existed prior to the point where it was adopted by theists to give their omnipotent and omniscient God a "get-out-of-jail free card". If God restrained himself from interfering in your choices then it was claimed that you were responsible for your actions. The problem is that if God is given omnipotence and omniscience, then he also becomes omni-responsible.