I see what you're saying, but I disagree that it relates to my post. Saying that two animals can both perceive an experience is not at all comparable to hypothesizing about the impact and interpretation of that experience. We know that dogs smell things. We know that dogs can be taught behavior - they learn. I haven't opined on their feelings in any fashion. A robot can perceive and learn... an observation that requires neither empathy nor in-depth knowledge of its programming.
Saying that two animals can perceive an "experience" is tantamount to saying that their perceptions are the same, which I think is wrong. Experience is an interpretation of perceptions. Both can perceive the same phenomenon, but their experiences will represent different interpretations of the same phenomenon. As for robots, there is experimental evidence to support the conclusion that people impute human characteristics to, and empathize with, program interfaces that they know are not human. I am referring to the work of Byron Reeves and Cliff Nass (see The Media Equation).
First the short answer - QM is essentially the source of randomness in all things. It's a well established field that has repeatedly demonstrated the fundamentally non-deterministic nature of, well, everything. That's why it ends up coming up in discussions of free will vs. determinism. Because it is fairly well established that reality is NOT perfectly deterministic - it is stochastic.
Short response: It is not at all well established that randomness has anything at all to do with free will. Even gamblers make a conscious choice to gamble, but that doesn't mean they randomly choose to do so. Moreover, it is only established that QM events are in a sense unpredictable, not random. Don't confuse unpredictability with randomness. It is a philosophical position that the unpredictability is the result of randomness.
Now for the longer bit. The process I outlined doesn't rely on QM for the experience part of it. I thought it was relatively clear... but then I also know exactly what I meant. The "experience" is the set of perceptions received by the entity in question. That is all pretty macro level, and QM is unlikely to have any meaningful impact there.
Perceptions are interpretations. They differ from sensations. Since neither can exist without neurons firing, they would necessarily also be subject to QM "randomness". You may well think you know what you are talking about, but I don't see a logical connection here with the process of making a decision.
QM comes in at the neural synapse level - it's the randomness effect.
I'm still waiting for you to explain how this relates to the process of making a choice. A lot of things also happen at the "synapse level". What does "randomness" have to do with it?
What I'm talking about is essentially an evolving cluster algorithm... only instead of the algorithm comparing the new data element set to ALL of the existing cluster centroids, there would be a boundary condition. The new set would be compared to a subset of centroids (depending on whether we're talking about parallel or serial processing) UNTIL a sufficient fit to a centroid has been achieved. In the case of a thinking algorithm, the data set would be the set of perceptions associated with an event. The brain would then effectively sample past experience clusters and compare the current set of perceptions against each cluster's centroid. It would keep sampling until it finds a centroid that is "close enough" to the current experience. It wouldn't look exhaustively for the "best fit" centroid - it wouldn't be identifying the true nearest neighbor. It would only look until it finds a neighbor that is within a reasonable radius (although it's a tad more complex to talk about radii in an n-dimensional space where n itself is a variable).
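If it helps to pin the idea down, here's a bare-bones sketch of that "close enough" lookup in Python. It assumes fixed-length perception vectors and a single distance threshold, which are simplifications of what I described, and the names (satisficing_match, radius) are just placeholders of mine:

```python
import random

import numpy as np


def satisficing_match(perception, centroids, radius, rng=None):
    """Sample existing cluster centroids in a random order and return the index
    of the first one that is 'close enough' to the current perception vector.
    No exhaustive nearest-neighbour search is done; returns None if nothing
    falls within the radius."""
    rng = rng or random.Random()
    order = list(range(len(centroids)))
    rng.shuffle(order)                      # random sampling order, not a ranked search
    for i in order:
        if np.linalg.norm(perception - centroids[i]) <= radius:
            return i                        # stop at the first acceptable fit
    return None
```

The point is the early exit: it settles for the first acceptable centroid instead of ranking all of them.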
I am familiar with programs that use statistics to identify clusters, but there are much simpler ways to describe the decision-making process that don't use buzzwords like "centroid", appeal to n-dimensional space, or low-level neural processing. I grant you that we may some day be able to give a soup-to-nuts explanation of how neurons relate to choices, but, as I've already explained (and I think you acknowledge), we can simulate the same decision-making process at a higher level in machines that are not animal brains. You have not even begun to explain what you mean by "experience" or "perception" when you talk about cluster centroids. All you've done is add a layer of jargon. You seem to be trying to explain the properties of an emergent systemic behavior in terms of the properties of components that make up the system. I think that Subsymbolic has been trying to get at why that is not necessarily a good idea (although he does seem to get a little hard-on every time someone talks about neurons and stochastic processes).
If we were running this as a machine learning model, we'd have a pseudo-random starting seed that dictates which centroid is compared first, and each subsequently compared centroid would be chosen at random as well. That would mean that different runs of the model would produce different outcomes, and because it's an evolving algorithm, it means that with each repeated run, the centroids themselves would change. For an entity with a very small range of perceptive sets (for example, a snail simply has fewer receptor cells than a dog does), there would necessarily be very few possible clusters, and they're likely to converge - so that over a large number of observed entities, the clusters of perception-response pairs would be small and very similar to each other. For an entity with a larger range of perceptive sets (the dog), there would more likely be a larger number of clusters of perception-response pairs (they'd exhibit more complex behaviors, which are less predictable). For a large enough observed sample of that entity, there would be some convergence in clustering (they'd start to form general patterns of behavior across individuals), but there would still be a fair bit of residual variation from one entity to another of the same type.
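To make the "evolving" part concrete, the same sketch can be extended (this reuses satisficing_match and the imports above; the learning rate lr and the new-cluster rule are my own simplifications): the matched centroid gets nudged toward each new perception, an unmatched perception starts a new cluster, and the seed then changes both which centroid gets matched and how the clusters drift from run to run.

```python
def run_event(perception, centroids, radius, lr, rng):
    """One perception-response event: find an acceptable centroid via
    satisficing_match and nudge it toward the new perception, so the clusters
    themselves evolve. A perception that matches nothing starts a new cluster."""
    idx = satisficing_match(perception, centroids, radius, rng)
    if idx is None:
        centroids.append(perception.copy())
        return len(centroids) - 1
    centroids[idx] += lr * (perception - centroids[idx])
    return idx


# Two runs over the same event stream, differing only in the sampling seed,
# can match different centroids and end up with different cluster sets.
gen = np.random.default_rng(0)
stream = [gen.normal(size=4) for _ in range(50)]
for seed in (1, 2):
    cents = [np.zeros(4)]
    rng = random.Random(seed)
    matches = [run_event(p, cents, radius=1.5, lr=0.1, rng=rng) for p in stream]
    print(seed, len(cents), matches[:10])
```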
You really don't need to explain unsupervised learning techniques for bootstrapping machine learning. It's a great idea, but there is a problem in scaling up those "proof of concept" toy programs. Nevertheless, machine learning specialists recognize the problem with hand-crafting training sets, so they understand the need for self-programming machines. They have found practical uses for these programs, but it is a mistake to confuse simulation with reality. Hence, object recognition is still a huge problem for AI researchers. We can do it up to a point, but scale-up still isn't there.
In this sort of model, QM is the physical mechanism for randomization. It would be less random than a computer-generated pseudo-random element would be... but it would still be present.
That means that the exact same entity, at the exact same time, with the exact same past experiences, and the exact same perception experience set... may have a different response than in the first run.
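In the toy version above, that's just the observation that the random draw - whatever its physical source - can change the outcome even when everything else is held fixed. For instance, if two centroids both sit inside the acceptance radius, the same perception can resolve to either one depending on the sampling order (again, my illustration, not a model of synapses):

```python
# Same entity, same centroids, same perception - only the random draw differs.
perception = np.array([0.0, 0.0])
centroids = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]   # both within radius 1.0
first = satisficing_match(perception, centroids, radius=1.0, rng=random.Random(3))
second = satisficing_match(perception, centroids, radius=1.0, rng=random.Random(4))
print(first, second)   # can differ, because only the sampling order differs
```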
You've gotten so far down in the weeds that you seem to have forgotten the original question--how "randomness" (aka "unpredictability") relates to the decision-making process. What does it explain about "free will"? That is the question that I've been asking and that you seem unable to address. You disagree with my claim that decision-making is a fully-determined process. Fine. Explain what it is about decision-making that is random.
That's all only getting us to how randomness (the QM element) plays into the framework for learning. That's not agency, it's not choice. It's the mechanism for learned and adaptive behavior.
Right, although just being able to simulate learning does not mean that you've illuminated the processes that are at play in brain activity when humans learn. We all know that some kind of associative processing is going on, but it seems to exist at a much higher (emergent) level than individual synapses. You still have to explain why people do the things they do before you can hold them responsible for their actions. Random neural processes are not a good place to start.
To get from there to agency, the entity needs to be able to reference past experience sets (both as individual specific experiences and as representative centroid sets), and be able to extrapolate from those past experience to imagine a set of possible future experiences. They need to be able to learn and adapt from 1) extrapolated, imagined scenarios or 2) vicarious experiences.
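In the same toy terms (and this is only a gesture at it), "imagining" could be modeled as extrapolating hypothetical perception sets from the stored centroids and then running the ordinary learning update on them, so the entity adapts without actually living through the scenario. The jitter-around-a-centroid rule below is a placeholder for whatever the real extrapolation mechanism would be (it reuses run_event and the imports above):

```python
def imagine_and_learn(centroids, radius, lr, rng, n_scenarios=10, spread=0.5):
    """Extrapolate hypothetical perception sets from past experience clusters and
    feed them through the same learning update used for real events (run_event),
    i.e. learn from imagined scenarios rather than lived ones."""
    np_rng = np.random.default_rng(rng.randrange(2**32))
    for _ in range(n_scenarios):
        base = centroids[rng.randrange(len(centroids))]                  # recall a past cluster
        imagined = base + np_rng.normal(scale=spread, size=base.shape)   # extrapolate a variant
        run_event(imagined, centroids, radius, lr, rng)
    return centroids
```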
Using the expression "experience set" does nothing more for you here than the word "experience" alone would. It begs the question of what defines the set. Or is your "set" just a random collection of elements?