Right where you started imagining what goes on in their heads. It is related to the Clever Hans phenomenon, which you may have heard of. You may also be familiar with Nagel's famous paper, "What Is It Like to Be a Bat?" Empathy between humans is possible because we all tend to have the same equipment and experiences on which to base it. Imagining the experiences and reasoning processes of other animals is not as easy.
I see what you're saying, but I disagree that it relates to my post. Saying that two animals can both perceive an experience is not at all comparable to hypothesizing about the impact and interpretation of that experience. We know that dogs smell things. We know that dogs can be taught behavior; they learn. I haven't opined on their feelings in any fashion. A robot can perceive and learn... noting that requires neither empathy nor in-depth knowledge of its programming.
I don't think that your outline was sufficiently detailed to make that case. For one thing, you need to explain what an "experience" is in terms of QM. Otherwise, how can you do your clustering? The problem with trying to attribute cognitive significance to QM is that everything physical involves QM, not just brains. What is it about brains that makes an appeal to QM so appealing?
First, the short answer: QM is essentially the source of randomness in all things. It's a well-established field that has repeatedly demonstrated the fundamentally non-deterministic nature of, well, everything. That's why it keeps coming up in discussions of free will vs. determinism: it is fairly well established that reality is NOT perfectly deterministic; it is stochastic.
Now for the longer bit. The process I outlined doesn't rely on QM for the experience part of it. I thought that was relatively clear... but then, I also know exactly what I meant. The "experience" is the set of perceptions received by the entity in question. That is all at a pretty macro level, where QM is unlikely to have any meaningful impact. QM comes in at the level of the neural synapse; it's the source of the randomness.
What I'm talking about is essentially an evolving cluster algorithm... only instead of comparing each new data set to ALL of the existing cluster centroids, there would be a boundary condition. The new set would be compared to a subset of centroids (one at a time or several at once, depending on whether we're talking about serial or parallel processing) UNTIL a sufficient fit to a centroid has been achieved. In the case of a thinking algorithm, the data set would be the set of perceptions associated with an event. The brain would effectively sample past experience clusters and compare this event's set of perceptions against the centroid of each sampled cluster. It would keep sampling until it found a centroid that is "close enough" to the current experience. It wouldn't search exhaustively for the "best fit" centroid; it wouldn't be identifying the true nearest neighbor. It would only look until it found a neighbor within a reasonable radius (although it's a tad more complex to talk about radii in an n-dimensional space where n itself is a variable). A rough sketch of that matching step is below.
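Something like this, as a minimal sketch (the function name, the fixed radius, the learning-rate nudge, and numpy's pseudo-random generator standing in for the QM randomness are all my illustrative choices, not claims about the actual mechanism):

```python
import numpy as np

def match_experience(experience, centroids, radius, rng, learning_rate=0.1):
    """Satisficing cluster match: sample stored centroids in random order
    and stop at the FIRST one within `radius` of the current experience,
    rather than exhaustively searching for the true nearest neighbor."""
    for i in rng.permutation(len(centroids)):
        if np.linalg.norm(experience - centroids[i]) <= radius:
            # "Close enough": nudge the matched centroid toward the new
            # experience, so the clusters themselves evolve over time.
            centroids[i] = centroids[i] + learning_rate * (experience - centroids[i])
            return i
    # Nothing close enough: this experience seeds a new cluster.
    centroids.append(np.array(experience, dtype=float))
    return len(centroids) - 1
```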
If we were running this as a machine learning model, we'd have a pseudo-random starting seed that dictates which centroid is compared first, and each subsequently compared centroid would be chosen at random as well. That means different runs of the model would produce different outcomes, and because it's an evolving algorithm, the centroids themselves would change with each repeated run. For an entity with a very small range of perceptive sets (a snail simply has fewer receptor cells than a dog does), there would necessarily be very few possible clusters, and they're likely to converge, so that across a large number of observed entities, the clusters of perception-response pairs would be few and very similar to each other. For an entity with a larger range of perceptive sets (the dog), there would more likely be a larger number of clusters of perception-response pairs (they'd exhibit more complex, less predictable behaviors). Across a large enough observed sample of that entity, there would be some convergence in clustering (general patterns of behavior would form across individuals), but there would still be a fair bit of residual variation from one entity to another of the same type.
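To make the seed-dependence concrete, here's a hypothetical driver built on the match_experience sketch above; the history length, dimensionality, and radius are arbitrary toy values:

```python
import numpy as np

def run_model(experiences, radius, seed):
    """Replay the same experience history with a given seed. Only the
    random sampling order differs between seeds, yet the clusters that
    emerge can differ in both number and location."""
    rng = np.random.default_rng(seed)
    centroids = []
    for exp in experiences:
        match_experience(exp, centroids, radius, rng)
    return centroids

# Toy history: 200 "experiences", each an 8-dimensional perception vector.
gen = np.random.default_rng(0)
history = [gen.normal(size=8) for _ in range(200)]
# Same history, two seeds -> possibly different cluster counts.
print(len(run_model(history, radius=2.5, seed=1)),
      len(run_model(history, radius=2.5, seed=2)))
```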
In this sort of model, QM is the physical mechanism for randomization. It would be less random than a computer-generated pseudo-random element would be... but it would still be present.
That means that the exact same entity, at the exact same time, with the exact same past experiences, and the exact same perception experience set... may have a different response than in the first run.
That's all only getting us to how randomness (the QM element) plays into the framework for learning. That's not agency, it's not choice. It's the mechanism for learned and adaptive behavior.
To get from there to agency, the entity needs to be able to reference past experience sets (both as individual specific experiences and as representative centroid sets), and to extrapolate from those past experiences to imagine a set of possible future experiences. It needs to be able to learn and adapt from 1) extrapolated, imagined scenarios or 2) vicarious experiences.
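In the same toy terms (again building on the match_experience sketch above, and emphatically a stand-in rather than a model of imagination), extrapolation might look like perturbing stored centroids into hypothetical future experiences and feeding them back through the same matching machinery:

```python
import numpy as np

def imagine_and_learn(centroids, radius, rng, n_scenarios=10, spread=1.0):
    """Toy extrapolation: sample imagined future experiences by
    perturbing stored centroids, then learn from them vicariously by
    running them through the same matching step used for real ones."""
    if not centroids:
        return  # nothing to extrapolate from yet
    for _ in range(n_scenarios):
        base = centroids[rng.integers(len(centroids))]
        imagined = base + rng.normal(scale=spread, size=base.shape)
        match_experience(imagined, centroids, radius, rng)
```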
- - - Updated - - -
This might seem like it's coming out of left field, but in hopes there's a smidgen of relevance: is there compatibility between the notions of contingent truths and necessary events?
Maybe? I don't know what you mean in this context.
I understand each word that you used, but when you put them together in that order I have no idea what you're saying!
- - - Updated - - -
But the observer has no control over quantum states. The observer cannot consciously access quantum states or effect changes as desired through an act of will.
How do you know this? How do you even know what the observer is?
Are we not talking about our own experience as conscious people?
Do you as a conscious person have access to the underlying mechanisms of your conscious experience, including your very existence?
Do you as a conscious person have access to quantum activity, manipulating it to your advantage and will?
Manipulation of the mechanism for randomness is not necessary for the existence of a decision-making framework.
- - - Updated - - -
Now let's add a memory function. Let's assume the entity has a way to store past experiences, and to compare the current experience to past experiences and select the response that "best fits" the current experience based on what it has learned. This is no longer perfectly deterministic. It's close, but not exact. Some degree of uncertainty has entered the system at this point. The reaction now depends not only on the current experience, but also on what other experiences the entity has had. This makes it less predictable. Unless we know all of the experiences that the entity has had, as well as the action taken in response to each experience, we can't perfectly predict a future reaction. It also means that as soon as you have more than one entity, it gets much more complicated to predict the general behavior of this type of entity in response to stimulus.
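A minimal sketch of that memory step, assuming the history is just a list of (experience, response) pairs and that "best fit" means nearest by distance (the names and structure are illustrative):

```python
import numpy as np

def respond(experience, past_pairs):
    """Select the response attached to the stored experience that best
    fits the current one. `past_pairs` is the entity's full history as
    (experience_vector, response) tuples (illustrative structure)."""
    _, best_response = min(
        past_pairs,
        key=lambda pair: np.linalg.norm(experience - pair[0]))
    return best_response
```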
I think you misunderstand determinism. The process you describe could be 'perfectly deterministic'.
Determinism doesn't entail predictability.
You seem to have a very different understanding of determinism than I do. What do you mean by determinism?
ETA: I re-read, and I see what you mean. I apologize; my WoT has things out of order. Or rather, the progression isn't out of order, but I reference the implicit randomness before I've introduced it. My presentation of the concept wasn't particularly good.