Might be a replicant
- Jul 7, 2014
- It's a desert out there
- Basic Beliefs
If you become an artificial intelligence researcher (as I did), you learn a lot about nondeterministic behavior in chaotic environments. The philosophical question that I am injecting into this free will discussion is the following: can a learning robot have "free will"? One's willingness to answer that question affirmatively depends on how far one is willing to extend the concept to cover an entity whose every action is predetermined, including its ability to learn from experience and adapt to new situations. Everything about the behavior of that robot can be predetermined, yet it can still make choices and learn to change its behavior when faced with similar obstacles in the future. The robot knows no more about its future than human beings and other biological organisms do. Whether we say that the robot has "free will" depends on whether it is able to learn and adapt to changing circumstances: what makes its will "free" is that it is free to change its future behavior. In effect, it can regret past behavior, but not change it. It can try to be a better robot in the future. In theory, a robot could even have predetermined routines for improving its learning processes--just as humans can learn to be better learners.
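To make that concrete, here is a deliberately tiny sketch (a toy value-update agent I made up for illustration, not any real architecture): every rule below is fixed in advance, yet the robot's behavior changes with experience.

```python
# Toy example: a fully predetermined agent that nonetheless "learns".
# Its choice rule and its update rule are both fixed in advance, yet its
# behavior after a collision differs from its behavior before one.

class Robot:
    def __init__(self):
        # Predetermined starting preferences for each (situation, action) pair.
        self.value = {("obstacle_ahead", "forward"): 1.0,
                      ("obstacle_ahead", "turn"): 0.5}

    def choose(self, situation):
        # Deterministic choice: always pick the currently highest-valued action.
        options = [(a, v) for (s, a), v in self.value.items() if s == situation]
        return max(options, key=lambda av: av[1])[0]

    def learn(self, situation, action, reward, rate=0.5):
        # Predetermined update rule: nudge the value toward the observed reward.
        key = (situation, action)
        self.value[key] += rate * (reward - self.value[key])

bot = Robot()
print(bot.choose("obstacle_ahead"))                # "forward" at first
bot.learn("obstacle_ahead", "forward", reward=-1)  # collision: negative reward
bot.learn("obstacle_ahead", "forward", reward=-1)
print(bot.choose("obstacle_ahead"))                # now "turn" -- same rules, new behavior
```

Nothing here is random, but the robot in effect "regrets" the collision: it cannot undo it, yet it changes its future behavior.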
In my view, an AI can have agency. And given a sufficiently large number of inputs to a decision matrix, the outcome of a decision made by an AI can become imperfectly predictable and effectively stochastic in nature.
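A toy illustration of that point (with the logistic map standing in for a complicated decision process): the update rule is completely deterministic, yet the final choice is so sensitive to the inputs that, from the outside, it is indistinguishable from a stochastic one.

```python
# Deterministic but practically unpredictable: the agent's internal state
# follows the chaotic logistic map, and its "decision" reads off that state.
# Nothing below is random, yet inputs differing by one part in a trillion
# can yield different decisions after enough update steps.

def decide(x0, steps=80):
    x = x0
    for _ in range(steps):
        x = 3.99 * x * (1 - x)   # fixed, deterministic update rule
    return "act_a" if x < 0.5 else "act_b"

print(decide(0.123456789))          # same input -> always the same choice
print(decide(0.123456789 + 1e-12))  # a tiny perturbation -> often a different one
```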
If a robot has the programming to learn, to adapt to changing externalities, and to form preferences (or perhaps to reprioritize goals), plus the flexibility to form extrapolative hypotheses and test them... then there's no reason to believe that it cannot have free will in the sense that I understand the term. I think it's entirely plausible that we will develop AIs that have will.
I think it might be a lot less plausible that we develop AIs that have curiosity, imagination, and emotions. I don't think it's impossible, but I think that aspect of sapience is much more complex than volition.
We are largely in agreement, but I would quibble with your last paragraph. Imagination is necessary in us "robots", because it is the workspace we use to predict future outcomes. It is no accident that natural languages often express the future in ways that differ sharply from the past and present. For example, English has past and present tense inflection on verbs, but it expresses the future with a separate auxiliary verb--"shall" or "will". Imaginary scenarios are also expressed with modals; "should", for example, is technically a past tense inflection of "shall". Some languages even seem to lack a special tense marker for future events altogether. Curiosity and emotions also play a functional role in decision-making, since they are the factors that motivate choices and determine how priorities emerge. Even robots have to be motivated to recharge their energy sources (i.e. "eat"), discard waste (i.e. "dead batteries"), and repair themselves (i.e. get "fresh batteries"). So it is natural for roboticists to build those factors into their creations as they perfect them. Right now we are at the stage where humans still have to change the robots' diapers, but robots will need to be self-sufficient if we keep sending them out to explore moons and planets.
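Here is a toy rendering of both points at once (all names and numbers invented for illustration): "imagination" as an internal model the robot runs candidate actions through before committing, and a battery "drive" that reprioritizes its goals.

```python
# Sketch: imagination as a predictive workspace, plus a survival drive.
# The robot simulates each candidate action against an internal model
# before acting; a low battery reshuffles its priorities.

WORLD_MODEL = {  # imagined outcome of each action: (battery change, task progress)
    "explore":  (-20, +5),
    "recharge": (+50,  0),
    "idle":     ( -1,  0),
}

def imagine(battery, action):
    """Predict the future state without actually acting -- the 'workspace'."""
    d_battery, d_progress = WORLD_MODEL[action]
    return battery + d_battery, d_progress

def choose(battery):
    def score(action):
        future_battery, progress = imagine(battery, action)
        if future_battery < 10:   # the survival drive dominates whenever the
            return -1000          # imagined outcome is running out of power
        return progress
    return max(WORLD_MODEL, key=score)

print(choose(battery=90))  # plenty of power: "explore" wins on task progress
print(choose(battery=25))  # low power: the recharge drive reprioritizes goals
```

The "emotion" here is nothing mystical; it is just a factor that reshuffles priorities, which is exactly the functional role described above.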
I was more trying to say that programming imagination and emotions into a robot seems like it would be more complex than an adaptive learning algorithm. I think we either already have adaptive learning algorithms or are right on the cusp of them. The intuition/irrationality/imagination element, though, I don't think we're close to at the moment.
Then again, I also think that agency and intelligence are very different things.