Alrighty folks, this is going to get complicated. Let's talk about a response model. Of course, this is only a model and reality is more complex. But we can contrast that response model to a "thinking" model that incorporates indeterminacy and randomness, and there we should begin to see some of the differences that we would colloquially call "free will" or "agency".
Response Model
In this model, the entity has a set of perceptors - ways to perceive the world around them. These can be eyeballs or cameras, piezoelectric crystals or hair follicles. They can come in all sorts of shapes and sizes, and can perceive any number of different types of events. For simplicity, let's consider an entity that has a visual perceptor.
The entity also has a variety of physical responses - things that the entity's physical form can do. They can blink or close shutters on their camera, they can recoil from physical stimulus, they can move appendages. For simplicity, let's consider an entity that can increase and decrease the aperture for their visual perceptor.
Another element needed in this model is a system of measurement - a gauge that gives some indication of "good" and "bad" for the experience the entity interprets from its perceptions. So for example, a human might interpret very bright light directly into their eyes as unpleasant, something that can damage their vision. A robot might interpret very bright light as "bad", something that can damage the camera receptors.
So in the very simplistic example here, we have an entity that experiences an event via a perceptor, gauges that experience as good/bad, and reacts with a physical response. Now, whether this is a disembodied eyeball or an automated camera, we have a nearly deterministic algorithm involved. The entity can react to stimulus by increasing or decreasing the amount of light entering the perceptor. This is response to stimulus. At this point, the entity is perfectly predictable.
The fundamental nature of this entity's process is effectively "If A then B".
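If you want to see just how mechanical that is, here's a tiny Python sketch of the idea. The light levels, thresholds, and aperture steps are all numbers I made up for illustration, not anything from a real camera or eye:

```python
# Minimal sketch of the purely reactive entity: one perceptor (light level),
# one gauge (too bright / too dim / fine), one response (change aperture).
# All numbers here are arbitrary illustration values.

def gauge(light_level: float) -> str:
    """Rate the current experience as 'bad-bright', 'bad-dim', or 'ok'."""
    if light_level > 0.8:
        return "bad-bright"
    if light_level < 0.2:
        return "bad-dim"
    return "ok"

def respond(light_level: float, aperture: float) -> float:
    """If A then B: the same stimulus always produces the same response."""
    rating = gauge(light_level)
    if rating == "bad-bright":
        return max(0.0, aperture - 0.1)   # close down
    if rating == "bad-dim":
        return min(1.0, aperture + 0.1)   # open up
    return aperture                        # leave it alone

# Given the same inputs, this entity is perfectly predictable:
print(respond(0.95, 0.5))  # always 0.4
print(respond(0.95, 0.5))  # always 0.4
```

Run it a thousand times and you get the same answer a thousand times. That's the whole point of this stage of the model.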
Now let's add a memory function. Let's assume the entity has a way to store past experiences, and to compare the current experience to past experiences and select the response that "best fits" the current experience based on what it has learned. This is no longer perfectly deterministic. It's close, but not exact. Some degree of uncertainty has entered the system at this point. The reaction now depends not only on the current experience, but also on what other experiences the entity has had. This makes it less predictable. Unless we know all of the experiences that the entity has had, as well as the action taken in response to each experience, we can't perfectly predict a future reaction. It also means that as soon as you have more than one entity, it gets much more complicated to predict the general behavior of this type of entity in response to stimulus.
The fundamental nature of this entity's process is effectively "If like A then B".
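One way to picture that memory step, if you'll forgive a bit more code: store (experience, response) pairs and reuse the response attached to the most similar past experience. This is just one plausible reading of "If like A then B" - the stored memories and the similarity measure below are invented for illustration:

```python
# Sketch of "If like A then B": reuse the response from the most similar
# past experience. The memory contents below are made up for illustration.

memory = [
    # (perceived light level, aperture change that was taken)
    (0.95, -0.10),
    (0.10, +0.10),
    (0.50,  0.00),
]

def respond_from_memory(light_level: float) -> float:
    """Pick the response attached to the closest remembered experience."""
    closest = min(memory, key=lambda pair: abs(pair[0] - light_level))
    return closest[1]

# The reaction now depends on the history, not just the current stimulus:
print(respond_from_memory(0.85))   # -0.1, because 0.95 is the nearest memory
memory.append((0.80, -0.05))       # a new experience changes future behaviour
print(respond_from_memory(0.85))   # -0.05 now
```

Same stimulus, different reaction, purely because the history is different. That's why you can't predict this entity without knowing everything it has experienced.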
Thinking Model
Here's the first major divergence in this approach. When we talk about "thinking" in this context, we're talking about forecasting - extrapolation, hypothesizing, and imagining. This is where we grant the entity the ability to take past experiences, project them with differences, and make a guess about what the best reaction to that future experience would be. This requires the entity to be able to look at past experiences, and categorize or cluster those experiences by things that were similar and things that were different - how much alike those past experiences were to each other, and how many "types" of experiences they've had. Clusters of experiences will form over time, but they won't be the same clusters for each entity.
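To make the clustering and forecasting idea a little more concrete, here's a rough sketch: a tiny k-means-style grouping of remembered light levels, plus a "thinking" step that projects a hypothetical future stimulus onto the nearest cluster. The single feature, the cluster count, and the update rule are all illustrative assumptions, not a claim about how brains actually do it:

```python
# Sketch of clustering past experiences and "thinking" about a hypothetical one.
# Feature values, cluster count, and the update loop are illustrative only.

experiences = [0.92, 0.88, 0.95, 0.11, 0.08, 0.50, 0.47]  # past light levels

def cluster(points, k=3, steps=10):
    """Very small k-means-style grouping: find k centroids for the experiences."""
    centroids = points[::3][:k]           # spread-out initial guesses
    for _ in range(steps):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            groups[nearest].append(p)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

centroids = cluster(experiences)

def forecast_response(hypothetical: float) -> str:
    """Project a not-yet-experienced stimulus onto the nearest cluster and
    guess which kind of reaction would fit it."""
    nearest = min(centroids, key=lambda c: abs(c - hypothetical))
    if nearest > 0.8:
        return "plan to close aperture"
    if nearest < 0.2:
        return "plan to open aperture"
    return "plan to do nothing"

print(centroids)                 # roughly a bright, a dim, and a middling cluster
print(forecast_response(0.99))   # imagining a very bright future event
```

Feed two entities different histories and they'll end up with different centroids - which is exactly the "they won't be the same clusters for each entity" point.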
Indeterminate Bounded Thinking Model
Now let's assume that there's a small degree of randomness involved - enough that it's explainable by quantum fluctuations. Sometimes the electron in that neuron jogs left instead of right. Also assume that the entity doesn't consult every possible experience, or even every possible cluster of experiences - they're going to consult "enough" experiences and cluster centroids to be able to get "sufficient" fit for the current experience. To translate, they only process until they find something that is close enough to the current experience to merit handing off to the assigned reaction. In this case, not only is the behavior not perfectly predictable at the entity level, it's not predictable at the aggregate level either. But we're working with a pretty simple entity here - visual stimuli only, with very limited responses available.
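Roughly, in code - with the caveat that the tolerance, the jitter size, and the memory contents are all made-up stand-ins, and random.gauss is just a placeholder for whatever the physical noise source really is:

```python
import random

# Sketch of bounded, slightly randomized lookup: scan remembered experiences
# in a shuffled order and stop at the first one that is "close enough".
# Threshold, jitter, and memory contents are illustrative assumptions.

memory = [
    (0.95, -0.10),
    (0.80, -0.05),
    (0.50,  0.00),
    (0.10, +0.10),
]

def respond_bounded(light_level: float, tolerance: float = 0.15) -> float:
    """Stop at the first sufficiently similar experience instead of the best one."""
    # A tiny perturbation of the perception itself, standing in for noise
    # at the physical level (quantum fluctuation, thermal jitter, etc.).
    perceived = light_level + random.gauss(0.0, 0.02)

    candidates = memory[:]
    random.shuffle(candidates)          # consultation order is not fixed
    for remembered, reaction in candidates:
        if abs(remembered - perceived) <= tolerance:
            return reaction             # good enough; stop searching
    # Nothing close enough: fall back to doing nothing.
    return 0.0

# Two identical stimuli can now produce different reactions, because both
# the perception and the search order carry a little noise.
print(respond_bounded(0.87))
print(respond_bounded(0.87))
```

Even with perfect knowledge of the memory contents, you can't say in advance which of the "close enough" memories gets consulted first. That's the indeterminacy the heading is pointing at.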
Go ahead and scale that up, to include all of the types of perceptors that humans have, in all their variability. And include all of the different types of physical responses available to us. And include the capacity for knowledge, and the inherent pattern-finding processes. Stick that all together, and you now have a situation where the human entity can reference very complex past experience clusters, with a very small element of randomization involved... and become definitively non-deterministic in nature. Choices are being made, both in the moment of the experience and in anticipation of an experience. And these are very real choices - there is a framework for how to make a choice, but the actual choice made is subject to randomization, comparison, and a valuation of similarity.
We can slice this six ways from Sunday... but at the end of the day, we've got an inherently non-deterministic entity, with a very complex and evolving set of perception-response clusters, for whom even perfect knowledge of initial conditions doesn't guarantee predictability of final state. This is an entity that inherently makes forecast estimates of best outcomes based on assumptions of inputs prior to the occurrence of those input events.
If that doesn't qualify as agency and choice - in short, free will - then there's no discussion. At that point, at least one side of this argument is engaged in belief-based argumentation. Possibly both sides.
ETA: IIRC, AI is at the point of learning machines - algorithms that can incorporate new experiences, and form response patterns based on the similarity of a current experience to past experiences. But we're not yet at the stage of forecasting - AI can't yet project past experiences onto a hypothetical future experience and determine an appropriate response. And AI at this point has limited perception capability, limited storage capacity, and limited pattern recognition ability. They're all getting better, and I expect to see fairly robust AI developing the capacity for agency and choice within my lifetime. We're on the right track; it's down to processing and storage capacity, and some very complex algorithms now.
The question becomes how much inherent randomness there will be in a designed entity as compared to an evolved one. If we develop circuits small enough to be affected by quantum fluctuations, then there's a much more real possibility for scary-smart-level AI than we're currently playing with.