• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Compatibilism: What's that About?

Why are you talking about plans in subjective space?
I am not. I am talking about the fact that plans held by things are objective parts of mechanical systems that objectively determine their behaviors.

The list of instructions to be acted upon by a processor is no less an object than the dwarf itself, which is itself an object composed of a series of bits on a field.

Just like our plans are objects composed in reality of a series of chemical potentials across a field of neurons.
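The point that a plan is as much an object as the agent holding it can be sketched in code. This is a minimal toy with invented names (`Plan`, `Agent`, the instruction strings); it is nobody's actual simulator, just an illustration of a plan-as-data object determining behavior:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    # The plan is literally an object: an ordered list of instructions.
    steps: list

@dataclass
class Agent:
    name: str
    plan: Plan
    log: list = field(default_factory=list)

    def run(self, world):
        # The plan objectively determines behavior: each step succeeds
        # or fails against the state of the world; nothing "subjective"
        # is required anywhere in the loop.
        for step in self.plan.steps:
            ok = world.get(step, True)
            self.log.append((step, ok))
            if not ok:
                break
        return self.log

dwarf = Agent("Urist", Plan(["walk_to_door", "open_door", "walk_through"]))
trace = dwarf.run({"open_door": False})  # the door happens to be locked
# trace: [("walk_to_door", True), ("open_door", False)]
```

The same skeleton describes the biological case: swap the list of strings for chemical potentials across a field of neurons and nothing about the logic changes.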


Your inability to step back from "subjectivity" and look at it as the complete mechanical OBJECT that it is, and to see that it OBJECTIVELY holds an OBJECT that is nonetheless a set of "arbitrary, but real, instructions", is your problem here.

It doesn't matter whether we process reality directly. It doesn't matter if the thing that measures reality and checks the requirement checks a very abstracted piece of reality. It doesn't matter that the real requirements are more like "this much chemical was dumped here in the brain (as a result of the hand moving)" or "the door was open".

They are essentially the same thing as far as "the universe" is concerned. Your hand-waving "SuBjEcTiViTy!!!111" at it doesn't change the fact that it's really an object acting like a fucking robot that happens to be able to program itself.

Unless the plan is actually an operationally defined formula it is useless
LOLWUT?

No, the plan is only useless if it fails; the plan is only useless if it is not °°°. And often it is useful even then, in after-action review.

You must really hate yourself if you don't want to be involved with decisions over who you yourself will be and what you will do.

The only thing that can manage and provide tight oversight on yourself is yourself. It's the only thing wired for providing immediate feedback and analysis.

The only way to achieve certain behavioral goals is then to do analysis, find out what you did wrong, test it...

Are you seriously hung up on some bad belief that the scientific method excludes informal science, and experimentation on and within the self?
'choosing' does not equate to 'free will'.
Nobody said it does. Sometimes the choice is "life! There is a gun in my face (a physical systemic requirement of an undeniable lizard-brain drive)", and we recognize that this means the adjoining will came from outside the drive system of the person speaking. It came from the drive system of someone else (maybe the guy with the gun?).

In this situation, choosing is not free will.

So no, choice does not of itself imply "free will" on the part of the person choosing, in all situations.

The ability of systems to execute arbitrary instruction sets implies •••.

The ability of systems to fail to execute instructions to the satisfaction of a requirement implies °°°.

Choosing requires a ••• that contains multiple options (direct or by reference), which will result in a single selection of new secondary •••. Whether the ••• that ended up choosing on a secondary ••• was °°° depends on whether or not the secondary ••• is itself °°°:

it chose that "my ••• shall be *** unto ***'s completion", which suggests *** may not constitute a °°° •••.

So, in short, you have made an unargued assertion, and thus committed a bare-assertion fallacy.
Selection does determine will
Ok, so, here you are admitting that (at least some) determined systems have WILL.

Now, ask yourself: Q: can a will fail unto its requirement?

If so, it has a truth value associated with it, a property derived from its intersection with reality. This truth value is °°°.

Congratulations, in finding ••• you found °°°.

A: yes, it may

Will is 'written' before it is made conscious
It is written by the brain. You have a mistaken understanding of what "conscious" versus not-conscious means.

You make all kinds of claims that this is meaningful but it really isn't.

has already been determined and fixed
It has been determined and fixed, by me, milliseconds before it is narrated back to me. I'm still the one who determined and fixed it. Of course it takes a little bit of time for that data to come back in a loop around to me, and I still have the ability in those milliseconds to send a NO, if I don't like what I see, or what any of the other processes in my brain say of what they see.

Generally it is a causal requirement, in fact, for something to be done BEFORE it may be evaluated by "the peanut gallery".

You keep claiming that the time delay of the narration means it was not consciously decided upon. That doesn't imply anything of the sort, and it's fundamentally bad science to say it does.

You (the you that makes decisions and gets an awareness of what happened narrated back) exist in a capacity where you get all the data you need to review what is happening in your head.

Let me reiterate: serial killers have a responsibility to kill themselves before they kill anyone else.
One need not be aware to react.
One does need to be aware to react. Reaction requires the system be aware of the action.

One need not be "aware that they are aware" to be aware.

Strike one.
If someone taps your crazy bone, whether in your knee or elbow, those are reactions. You become aware after you react (jerk).
And there's a set of nerves in my knee that is aware it happened and lets me know how aware it was after the fact. Often the report arrives so late that the only thing I can do is know it happened.

Thankfully when it comes to distally driven behaviors, usually they aren't too problematic.

As it is though, we can still change and impact certain "normally reflexive" actions through mindfulness.

One of the parts in here is actually "me" and the things that "part that is me" is aware of are being reported not up to a different system but back around directly to itself.

It still takes a moment for such recurrent signals to come back around: they can only arrive a full "clock cycle" or more later!
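The timing claim here, that a decision loops back for narration one cycle later and can still be vetoed before commitment, can be put as a toy loop. Names and tick labels are invented; this is a cartoon of the claim, not a neural model:

```python
def run_decision(decision, approve):
    """Tick t0: a decision is produced. Tick t1: it is narrated back,
    one full "clock cycle" later. Tick t2: it commits unless the
    reviewing part sends a NO in the window between narration and
    commitment."""
    log = [("t0_decide", decision), ("t1_narrate", decision)]
    if approve(decision):
        log.append(("t2_commit", decision))
        return log, decision
    log.append(("t2_veto", decision))
    return log, None

# The reviewer dislikes what it sees narrated back and sends a NO.
log, outcome = run_decision("grab_cake", approve=lambda d: d != "grab_cake")
# outcome is None; the log ends with ("t2_veto", "grab_cake")
```

The delay between t0 and t1 is exactly the narration lag under discussion; nothing about it implies the decision was made by something other than the system doing the reviewing.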
I'm pretty sure you are watching the doctor tap below your patella when you react to the tap. IOW your argument is BS.
And yet it still happens when I am not looking. It's almost as if some piece of me was aware that a thing struck me there...
 
Why wouldn't it be simpler to say:
If God is omniscient and knows that I will do x then I will do x.
If God is omniscient and knows that I will do x, then I will not do y.

It would not, because that fails to capture what we want to know. We already know that if God is omniscient, then I will do x and not y if God foreknows that I will do x.

What we want to know is whether I am free to do y instead of x. That’s the claim of the theological or epistemic determinist — that if God, or any omniscient observer, knows in advance that I will do x, then I must do x. IOW, my act is not free (contingent, could be otherwise) but necessary.

Parsing it out in the possible worlds heuristic shows where the argument goes awry. In situations like this, we are inclined, correctly, to think that something here is necessarily true. It’s just not what we think it is.

This is what’s not true: If God knows in advance that I will do x, then I must (necessarily) do x.

This is what is true: Necessarily (if God knows in advance that I will do x, then I will (but not must) do x).

It’s a logical error to assign the modal necessity operator merely to the consequent (then I will do x) of the antecedent (If God knows in advance that I will do x). One must assign it conjointly to the antecedent and consequent together.
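The scope distinction can be written out in modal notation, with $K$ abbreviating "God knows in advance that I will do x":

```latex
% Invalid: the necessity operator attached only to the consequent
K \rightarrow \Box x
% Valid: the necessity operator governing the whole conditional
\Box\,(K \rightarrow x)
```

The first form says the act itself is necessary; the second says only that the conditional as a whole cannot fail, which leaves the act contingent.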

Once we do this, we realize that the following statement is also valid:

Necessarily (if God knows in advance that I will do y, then I will (but not must) do y).

We now see that it is logically possible for me to do either x or y even in the presence of an omniscient agent who foreknows my every act. What is not possible is for me to escape detection of my act, but that’s totally different from my affirmed ability to do either x or y.

From this we can further see that the idea that God’s foreknowledge of my act makes my act happen gets the flow of truth-making exactly backward. God’s foreknowledge does not supply the truth grounds for what I do. Rather, what I do supplies the truth grounds for what God foreknows.

This is also the solution, as noted earlier, to logical determinism, the idea that if it’s true today that tomorrow I will do x, then tomorrow I must do x (no free will). Rather, tomorrow I will freely do x or y, and whatever I do, will supply the truth grounds of the prior proposition. This is the true solution to Aristotle’s Sea Battle argument. Aristotle concluded that if it’s true today that tomorrow a sea battle will occur, then tomorrow it must occur and fatalism holds. Finding this conclusion unpalatable, he concluded that no proposition can be true prior to the fact that it describes. But modal logic, to which Aristotle had no recourse, proves that his solution is superfluous.


 

I've heard it suggested that God's omnipotence allows him to create not just the universe, but the rules by which that universe operates. Introducing the notion of other "possible worlds" would seem to open the door for a world where it is possible for God to be omniscient, and know that I will do x, but I will do y.

If God is defined in the standard Christian way, he necessarily exists (exists at all possible worlds) and is necessarily omnipotent (is omnipotent at all possible worlds).

Keep in mind that “possible worlds” is an abstraction, not meant to be taken literally like the parallel worlds of sci-fi. It means logically possible worlds, worlds that might have obtained even if they did not. You captured the essence of this yourself when you pointed out that even if yesterday I had salad instead of steak, it is, was, and always will be possible that I had steak instead of salad. In modal-speak these are called possible non-actual worlds, and there are a great many more (infinitely more?) of them than the actual world that we inhabit.

If God is as defined in the standard Christian way, there is no (logically) possible world at which he foreknows I will do x and I do y instead. Even the standard Christian god, it is thought, cannot break the laws of logic.

St. Anselm thought it was possible to provide a logical proof of the existence of God. Known as the Ontological Argument, it failed. However, in the 20th century, Kurt Gödel cleverly adapted Anselm’s argument to modal logic and came up with what is known as the Modal Ontological Argument for God. Several others have since modalized Anselm’s argument.

(Almost!) no one thinks Gödel proved God exists, but what he did seem to show, in a rigorously logical fashion, is that if God exists, he necessarily exists, and if he does not exist, he necessarily fails to exist. Thus the existence of God is restored as an empirical matter, with a dash of necessity. God becomes like the unproven Goldbach’s Conjecture. If the conjecture turns out to be true, then it is necessarily true (true at all possible worlds). If it turns out to be false, then it is necessarily false (false at all possible worlds).

As a side note, not everyone thinks that possible worlds are merely an abstraction. The analytic philosopher David K. Lewis (whose paper “Are We Free to Break the Laws?” I previously linked) wrote a book called On the Plurality of Worlds in which he argued that all possible non-actual worlds (counterfactual worlds relative to us) actually exist, but are only actual to their own inhabitants. Thus there are actual worlds (actual to their inhabitants only) where donkeys talk, pigs fly, and the Greek Gods are literally real. That’s because all those worlds are logically possible, but even so, there are no worlds where triangles have four sides. Lewis said that when people actually grasped what he was saying, they always gave him “an incredulous stare.” No doubt!
 
I'm still having trouble with "must" and "will". But I solved (well, to my satisfaction anyway) the distinction between "can" and "will".

If "necessity" is a mode, then is "possibility" another mode?

Possibility is a notion with a specific function, to deal with the common problem that we are not omniscient, and we are often uncertain as to what will happen. To deal with that uncertainty as to what "will" happen we switch to the language and logic of possibilities, things that "can" happen. For example, "Will that red light up ahead remain red when we arrive, or will it change to green?" We don't know. But we are certain that it "can" remain red and that it "can" change to green. Only one thing will happen, but two things can happen. And, while we are still uncertain as to what "will" happen, we are certain as to what "can" happen.
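The traffic-light example can be put in code: "can" ranges over the option set defined by our uncertainty, while "will" names the single outcome the process actually produces. A toy model with invented timings:

```python
def light_on_arrival(arrival_time, cycle=60, green_from=30):
    # Deterministic toy signal: red for the first half of each cycle,
    # green for the second half.
    return "green" if arrival_time % cycle >= green_from else "red"

# Before arriving we don't know the phase, so both options are live:
can_happen = {"red", "green"}       # what CAN happen: certain in advance
will_happen = light_on_arrival(95)  # what WILL happen: one outcome, "green"
```

Two things can happen; only one thing will. The "can" set is a fact about our uncertainty, and it stays true even after the single "will" outcome is settled.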

So, "will" and "can" are two distinct concepts with different functions. Is this about "modes"?
 
Kind of? Either way, I think it's a false distinction. Modal logic in deterministic systems gets a little weird, which again is how this argument often arises in the first place.

Essentially, when you get so precise with your systemics that you can say "determinism", then much as you have to be precise about °°° and •••, you also have to be precise about things like "can", which is not to say that something that "could" have happened but didn't was ever really going to happen.

Merely that the will was constructed, was assessed as "provisionally °°°", and then made "actually not °°°" by the reality of the ••• not being selected for pursuit. In reality the ••• remains "provisionally °°°" until you are beyond the point where the momentary potential required for it is no longer available, and you have made it "clearly not °°°".

In this way, "can" does not ever really mean "•••"; it just means "it was there during ••• selection, and nothing other than the selection of another ••• made it not °°°", or...

"Could have" is used to denote •••(A) made not-°°° by a °°° •••(*) selecting •••(B), where •••(B) excludes •••(A).
 
You say that because you prefer not to consider any possibility beyond what you want to be true and believe to be true.
Excuse me while I break my promise to my husband, go back to FSTDT, and nominate this post of yours for a Shiny Mirror Award.

I have described the reasons why compatibilism fails as a definition of free will and as an argument, and have supported what I said.




Not really. I love my husband to the extent that this ••• shall not be °°° nor held by any °°° ••• of my own.

Actually try reading the definitions and doing the math with them, and just SEE if even once a "freedom" of your libertarian variety has to be imposed there.

You will find that there is not.

Anyway, I repeat...

Anyone who holds a ••• to kill people unilaterally ought to use whatever leverage they have to guarantee the ••• to kill people points at themselves, such that it is °°° with respect to that requirement.

That's nonsense. You have not understood compatibilism, incompatibilism or determinism....or the nature of cognition and decision making.

I could try to explain again, provide quotes, case studies, experiments, narrator function, PFC, etc, but I suspect it'd be a waste of time.

Stick to your beliefs and be happy.
 

Nice ad hom, though against a web site and not an individual. Is there a particular term for that? Anyway, I’ve no idea whether Aeon is a good source or not, but it’s irrelevant, because the author is a respected expert in the field with plenty of studies and publications to his credit. He should not be believed just because of this — that would be an appeal to authority — but his credentials are certainly relevant to the claims he makes. And he makes it quite clear, with evidence and examples, that the brain is not a computer in any relevant way. Please address the substance of his arguments, if you can, rather than taking a shot at the platform for his arguments.

I was commenting on your choice of material. What do you think it means? What do you think the author is arguing?

Can you say it in your own words?
 
The brain as a computer is a metaphor. As the above linked article notes, we’ve been using metaphors for thousands of years for the brain, with the latest technology of each era being adopted as a metaphor for how the brain works.

I take no stand on whether a computer is, or could be, conscious. But the point of the above-linked article is that the brain does not really work like a computer at all. That said, I’m not sure how the issue of how the brain actually works really has bearing on free will. I‘m prepared to accept that a computer has a kind of free will and possibly even consciousness, that consciousness could be substrate independent, without endorsing the idea that the brain is a computer or resembles one in any relevant sense.

A metaphor? The brain literally acquires and processes information.

Your brain does not process information … the very first clause of the subhead to the article. Later he explains in depth why this is so. Deal with that, please.

Deal with what? Since it was you who quoted ''Your brain does not process information. Your brain is not a computer,'' it implies that you don't believe the brain processes information or 'computes'?

I did not say the brain is a linear computer like a laptop; it's a parallel information processor....and the article just used the word 'computer' to convey that the brain is an information processor, without going into details on the nature of computing in the brain.

You seized the word 'computer' like a lifeline for the failed argument that is compatibilism.

''Our brain computes information in different steps. So imagine a lot of stimuli arriving at us through our five senses, millions of stimuli each day. They first enter our sensory memory; this sensory memory works like, you can imagine it as, some kind of buffer. It only works for milliseconds, and only the information that gets past those milliseconds is forwarded into working memory.....''
 
You have not understood compatibilism, incompatibilism or determinism....or the nature of cognition and decision making
I design systems that make decisions and perform cognition.

I have designed them specifically to be deterministic.

This requires understanding things on this front fairly completely; it's damn near impossible to do by accident. It took biology billions of years and untold multitudes of failures to accomplish it.
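A decision-making system of the sort described can be fully deterministic. Here is a minimal sketch with a hypothetical scoring scheme and invented names; the point is only that identical inputs always yield an identical choice:

```python
def decide(options, weights):
    # Score each option against fixed weights; break ties by name so
    # the same inputs always yield the same choice, every run.
    def score(option):
        return sum(weights.get(f, 0) for f in option["features"])
    return min(options, key=lambda o: (-score(o), o["name"]))

cars = [
    {"name": "sedan", "features": ["cheap", "efficient"]},
    {"name": "truck", "features": ["towing"]},
]
weights = {"cheap": 2, "efficient": 1, "towing": 2}
choice = decide(cars, weights)  # deterministic: "sedan" scores 3, "truck" 2
```

Nothing in this procedure is random, yet it weighs multiple options and selects one, which is all that "making a decision" requires.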

Stick to your beliefs and be happy.
Again with your shiny mirrors.
 
How the brain processes information:
Genetically determined circuits are the foundation of the nervous system.
  1. Neuronal circuits are formed by genetic programs during embryonic development and modified through interactions with the internal and external environment.
  2. Sensory circuits (sight, touch, hearing, smell, taste) bring information to the nervous system, whereas motor circuits send information to muscles and glands.
  3. The simplest circuit is a reflex, in which sensory stimulus directly triggers an immediate motor response.
  4. Complex responses occur when the brain integrates information from many brain circuits to generate a response.
  5. Simple and complex interactions among neurons take place on time scales ranging from milliseconds to months.
  6. The brain is organized to recognize sensations, initiate behaviors, and store and access memories that can last a lifetime.

How neurons form long term memory:

“Memory is essential to all aspects of human existence. The question of how we encode memories that last a lifetime is a fundamental one, and our study gets to the very heart of this phenomenon,” said Greenberg, the HMS Nathan Marsh Pusey Professor of Neurobiology and study corresponding author.

The researchers observed that new experiences activate sparse populations of neurons in the hippocampus that express two genes, Fos and Scg2. These genes allow neurons to fine-tune inputs from so-called inhibitory interneurons, cells that dampen neuronal excitation. In this way, small groups of disparate neurons may form persistent networks with coordinated activity in response to an experience.

“This mechanism likely allows neurons to better talk to each other so that the next time a memory needs to be recalled, the neurons fire more synchronously,” Yap said. “We think coincident activation of this Fos-mediated circuit is potentially a necessary feature for memory consolidation, for example, during sleep, and also memory recall in the brain.”

While this micro-information is always interesting, it does not change anything about the macro issue of free will. A person considers different cars and chooses one. A person taps on several different melons and chooses one. A person considers several people to ask out on a date and chooses one. A person browses the restaurant menu of possible dinners and chooses one.

Of course it matters. If our will, be it conscious or unconscious, plays no part in determining actions, we have no logical claim to free will.

It just becomes an ideology, something to feel good about...''oh, we have free will.'' Isn't that nice? Like having a Guardian Angel or Teddy Bear, snuggly and warm.


Whether it is "I will buy this car", or "I will buy this melon", or "I will invite Cindy to the prom", or "I will have the salad for dinner", it is all a matter of a "will" that we choose for ourselves and by ourselves, free of coercion and undue influence.

Nothing is free from inner necessitation. ''I will buy this car'' has a large web of causality, antecedents that bring you to a decision that has no alternative. You buy the car only to realize the next day that you should have chosen the other one.

We are in two parts, the unconscious mechanisms and the conscious experience of self and agency, and the problem for the idea of free will is that the unconscious mechanisms determine what is experienced consciously, self awareness and a sense of agency. That is the illusion of conscious will, the disconnect between the means of experience and conscious experience itself.

What the unconscious means of our experience does is not subject to will or wish, just information processing.


I don't know where you get the idea that anyone thinks this is not happening in our own brains. You may surprise us with new facts about the brain, but these facts change nothing about the nature of the basic question: "Who is making the choice that will control what will happen next?" In the absence of coercion or other undue influence, it is still us in control.

I have never said or even suggested that it's not happening in our brains. It is the brain that generates us, our experience of self, thought and conscious action. I have said that repeatedly.

Quote:
"And the electrical activity in these neurons is known to reflect the delivery of this chemical, dopamine, to the frontal cortex. Dopamine is one of several neurotransmitters thought to regulate emotional response, and is suspected of playing a central role in schizophrenia, Parkinson's disease, and drug abuse," Montague says. "We think these dopamine neurons are making guesses at likely future rewards. The neuron is constantly making a guess at the time and magnitude of the reward."

"If what it expects doesn't arrive, it doesn't change its firing. If it expects a certain amount of reward at a particular time and the reward is actually higher, it's surprised by that and increases its delivery of dopamine," he explains. "And if it expects a certain level (of reward) and it actually gets less, it decreases its level of dopamine delivery."

Thus, says Montague, "what we see is that the dopamine neurons change the way they make electrical impulses in exactly the same way the animal changes his behavior. The way the neurons change their predictions correlates with the behavioral changes of the monkey almost exactly."
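The firing pattern Montague describes matches the textbook reward-prediction-error rule from reinforcement learning. This is a generic Rescorla-Wagner/TD-style sketch with invented names, not a claim about Montague's actual model:

```python
def update(expected, reward, rate=0.5):
    # Prediction error: positive when reward beats expectation
    # (more "dopamine"), negative when it falls short, and zero
    # when the prediction was exactly right.
    error = reward - expected
    return expected + rate * error, error

expected = 0.0
expected, err1 = update(expected, 1.0)  # better than expected: err1 = 1.0
expected, err2 = update(expected, 0.5)  # exactly as now expected: err2 = 0.0
```

The "constantly making a guess at the time and magnitude of the reward" behavior is just this loop run continuously: the expectation is the guess, and the error term is what changes the signaling.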

Whether one feels compelled or not, the decision making process itself is determined by an interaction of input and memory through the agency of neural circuitry.

As it stands, neither conscious nor unconscious will has the power to alter determined actions or outcomes; this process of inner necessitation excludes any notion of free will.
 
this process of inner necessitation excludes any notion of free will
You keep asserting that, and yet °°° and ••• do not suffer revocation of their existence.

All you can do is keep asserting this without argument and it's kind of sad.

I accept that you lack the power to create a ••• in which you read the definitions of ••• and °°° and do the simple exercise of calculating whether the dwarf's •••(open the door) is °°°(is door locked?).
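Whatever the redacted symbols stand for, the "simple exercise" itself is a one-line calculation: check the dwarf's plan (open the door) against the state of the world (is the door locked?). Names here are invented for illustration:

```python
def plan_succeeds(world):
    # The dwarf's plan is "open the door"; it succeeds exactly when
    # the world does not have the door locked.
    return not world.get("door_locked", False)

result_locked = plan_succeeds({"door_locked": True})     # False
result_unlocked = plan_succeeds({"door_locked": False})  # True
```

That is the whole exercise: one boolean evaluated against world state, with no extra metaphysics required to compute it.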

Even so, burying your head in the sand about these concepts will not stop us from correcting your misconceptions and arguing against them.
 
Whether it is "I will buy this car", or "I will buy this melon", or "I will invite Cindy to the prom", or "I will have the salad for dinner", it is all a matter of a "will" that we choose for ourselves and by ourselves, free of coercion and undue influence.

Nothing is free from inner necessitation. ''I will buy this car'' has a large web of causality, antecedents that bring you to a decision that has no alternative. You buy the car only to realize the next day that you should have chosen the other one.

I'm actually counting on both inner necessitation and outer necessitation to choose my car. What features would I like to have in my new car? Now, what are my actual possibilities here on the car lot? Both inner and outer causes play a role in my decision.

And these are not "undue influences" because these are the parameters that define my real set of options. They are expected from the time I walked onto this car lot.

But what I don't expect is for the salesman to pull a gun on me and say, "Buy this car right now or I'll blow your brains out!". That would be an "undue influence".

You understand this, right?

We are in two parts: the unconscious mechanisms and the conscious experience of self and agency. The problem for the idea of free will is that the unconscious mechanisms determine what is experienced consciously: self-awareness and a sense of agency. That is the illusion of conscious will, the disconnect between the means of experience and conscious experience itself.

That doesn't matter! How the brain works is how the brain works. But I know for a fact that I am here at the car lot, and that it is up to me, and to no one else, to decide which car I will purchase. This is what free will is about. It is simply a freely chosen, "I will buy that car". And what is my choosing free of? It is free of coercion and undue influence. Nothing more. Nothing less.

It is not freedom from my brain. It is not freedom from how my brain works. It is not freedom from unconscious processing. It is not freedom from antecedent causes. It is simply freedom from coercion and undue influence. Nothing more. Nothing less.

What the unconscious means of our experience does is not subject to will or wish; it is just information processing.

And with that you step outside of neuroscience, because neuroscience would attempt to explain volition, not eliminate it. The notion that explaining something explains it away is a reductionist fallacy.

It is the brain that generates us, our experience of self, thought and conscious action. I have said that repeatedly.

Good. Then you should understand that it is my brain that is choosing to buy the car. And it is my brain that looks about the car lot at all the alternate possibilities. And I'm pretty sure that those possibilities are quite real, because I'm about to choose one of them and drive it home.

Quote:
"And the electrical activity in these neurons is known to reflect the delivery of this chemical, dopamine, to the frontal cortex. Dopamine is one of several neurotransmitters thought to regulate emotional response, and is suspected of playing a central role in schizophrenia, Parkinson's disease, and drug abuse," Montague says. "We think these dopamine neurons are making guesses at likely future rewards. The neuron is constantly making a guess at the time and magnitude of the reward."

"If what it expects doesn't arrive, it doesn't change its firing. If it expects a certain amount of reward at a particular time and the reward is actually higher, it's surprised by that and increases its delivery of dopamine," he explains. "And if it expects a certain level (of reward) and it actually gets less, it decreases its level of dopamine delivery."

Thus, says Montague, "what we see is that the dopamine neurons change the way they make electrical impulses in exactly the same way the animal changes his behavior. The way the neurons change their predictions correlates with the behavioral changes of the monkey almost exactly."
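What Montague describes is, in machine-learning terms, a reward-prediction-error signal of the kind used in temporal-difference learning. A minimal sketch of the idea (the learning rate and reward values here are illustrative, not taken from the study):

```python
def td_prediction_errors(rewards, alpha=0.1):
    """Track an expectation V; the 'dopamine-like' signal is the
    prediction error: actual reward minus expected reward."""
    V = 0.0
    errors = []
    for r in rewards:
        delta = r - V        # positive if reward beats expectation, negative if it falls short
        errors.append(delta)
        V += alpha * delta   # expectation drifts toward what actually arrives
    return V, errors

# A steady reward of 1.0: early deliveries are "surprising" (large error),
# but as V learns to expect the reward, the error signal fades toward zero.
V, errors = td_prediction_errors([1.0] * 50)
```

As with Montague's neurons, once the reward is fully predicted the signal goes flat; it would spike again only if the reward arrived bigger or smaller than expected.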

Whether one feels compelled or not, the decision making process itself is determined by an interaction of input and memory through the agency of neural circuitry.

As it stands, neither conscious nor unconscious will has the power to alter determined actions or outcomes; this process of inner necessitation excludes any notion of free will.

So, Montague, whoever he is (once again I could not follow the broken link you posted) apparently has the illusion that free will means freedom from the chemical makeup and functioning of the brain. Where would he get such a silly idea?!

Free will is simply when we decide for ourselves (via our own brain's normal way of functioning) what we will do, while free of coercion and undue influence. It is really a simple concept and is commonly understood by most people.

Just give them a simple example, like the bank clerk being forced to give the bank's money to the robber who is pointing a gun at her. And ask them, "Is she acting of her own free will?". Then when they answer, "No, she is not acting of her own free will", ask them how do they know. And they will happily explain to us that free will means not being forced to do something against your will.

It's not rocket science. And it's not even brain science. It is how an empirical event is assessed when determining a person's responsibility for their actions.
 
The brain as a computer is a metaphor. As the above linked article notes, we’ve been using metaphors for thousands of years for the brain, with the latest technology of each era being adopted as a metaphor for how the brain works.

I take no stand on whether a computer is, or could be, conscious. But the point of the above-linked article is that the brain does not really work like a computer at all. That said, I’m not sure how the issue of how the brain actually works really has bearing on free will. I’m prepared to accept that a computer has a kind of free will and possibly even consciousness, that consciousness could be substrate independent, without endorsing the idea that the brain is a computer or resembles one in any relevant sense.

A metaphor? The brain literally acquires and processes information.

Your brain does not process information … very first clause of the subhead to the article. Later he explains in depth why this is so. Deal with that, please.

Deal with what? Since it was you who quoted ''Your brain does not process information. Your brain is not a computer.'', it implies that you don't believe that the brain processes information or 'computes'?

I did not say the brain is a linear computer like a laptop, it's a parallel information processor....and the article just used the word 'computer' to convey that the brain is an information processor, without going into details on the nature of computing in the brain.

You seized the word 'computer' like a lifeline for the failed argument that is compatibilism.

I did that? I explicitly said that whether the brain is a computer or not, or acts like a computer or not, has no bearing on the free will argument. I raise the point about computation only to underscore the fact that you repeatedly make assertions about issues that are contested as if they are already settled fact. It is not a settled fact that the brain is, or is like, a computer — quite the contrary. Just as it’s not an agreed upon fact of what determinism is, as you keep falsely maintaining. It’s not even a settled fact that determinism is true — if the world is entirely quantum, which it appears to be, then determinism is false. The determinism we seem to get at the so-called classical level would then be statistical.

So now the brain is a parallel processing computer?

No, it’s not.

From the book:

From the dawn of the industrial revolution, people have viewed the brain as some sort of machine. They knew there weren't gears and cogs in the head, but it was the best metaphor they had. Somehow information entered the brain and the brain-machine determined how the body should react. During the computer age, the brain has been viewed as a particular type of machine, the programmable computer. And as we saw in chapter 1, AI researchers have stuck with this view, arguing that their lack of progress is only due to how small and slow computers remain compared to the human brain. Today's computers may be equivalent only to a cockroach brain, they say, but when we make bigger and faster computers they will be as intelligent as humans.

There is a largely ignored problem with this brain-as-computer analogy. Neurons are quite slow compared to the transistors in a computer. A neuron collects inputs from its synapses, and combines these inputs together to decide when to output a spike to other neurons. A typical neuron can do this and reset itself in about five milliseconds (5 ms), or around two hundred times per second. This may seem fast, but a modern silicon-based computer can do one billion operations in a second. This means a basic computer operation is five million times faster than the basic operation in your brain! That is a very, very big difference. So how is it possible that a brain could be faster and more powerful than our fastest digital computers? "No problem," say the brain-as-computer people. "The brain is a parallel computer. It has billions of cells all computing at the same time. This parallelism vastly multiplies the processing power of the biological brain."

I always felt this argument was a fallacy, and a simple thought experiment shows why. It is called the "one hundred–step rule." A human can perform significant tasks in much less time than a second. For example, I could show you a photograph and ask you to determine if there is a cat in the image. Your job would be to push a button if there is a cat, but not if you see a bear or a warthog or a turnip. This task is difficult or impossible for a computer to perform today, yet a human can do it reliably in half a second or less. But neurons are slow, so in that half a second, the information entering your brain can only traverse a chain one hundred neurons long. That is, the brain "computes" solutions to problems like this in one hundred steps or fewer, regardless of how many total neurons might be involved. From the time light enters your eye to the time you press the button, a chain no longer than one hundred neurons could be involved. A digital computer attempting to solve the same problem would take billions of steps. One hundred computer instructions are barely enough to move a single character on the computer's display, let alone do something interesting.

But if I have many millions of neurons working together, isn't that like a parallel computer? Not really. Brains operate in parallel and parallel computers operate in parallel, but that's the only thing they have in common. Parallel computers combine many fast computers to work on large problems such as computing tomorrow's weather. To predict the weather you have to compute the physical conditions at many points on the planet. Each computer can work on a different location at the same time. But even though there may be hundreds or even thousands of computers working in parallel, the individual computers still need to perform billions or trillions of steps to accomplish their task. The largest conceivable parallel computer can't do anything useful in one hundred steps, no matter how large or how fast.

Here is an analogy. Suppose I ask you to carry one hundred stone blocks across a desert. You can carry one stone at a time and it takes a million steps to cross the desert. You figure this will take a long time to complete by yourself, so you recruit a hundred workers to do it in parallel. The task now goes a hundred times faster, but it still requires a minimum of a million steps to cross the desert. Hiring more workers— even a thousand workers— wouldn't provide any additional gain. No matter how many workers you hire, the problem cannot be solved in less time than it takes to walk a million steps. The same is true for parallel computers. After a point, adding more processors doesn't make a difference. A computer, no matter how many processors it might have and no matter how fast it runs, cannot "compute" the answer to difficult problems in one hundred steps.

So how can a brain perform difficult tasks in one hundred steps that the largest parallel computer imaginable can't solve in a million or a billion steps? The answer is the brain doesn't "compute" the answers to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough to do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn't a computer at all.
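The arithmetic behind the book's timing claims is easy to check, using only the figures quoted in the passage itself:

```python
# Figures quoted in the passage above.
neuron_op_s   = 0.005   # a neuron fires and resets in ~5 ms (about 200 times per second)
task_time_s   = 0.5     # recognizing a cat in a photo takes roughly half a second
cpu_ops_per_s = 1e9     # "one billion operations in a second"

# Longest possible serial chain of neural operations in half a second:
serial_neural_steps = task_time_s / neuron_op_s   # 100 steps

# How much faster a basic computer operation is than a basic neural one:
speed_ratio = cpu_ops_per_s * neuron_op_s         # 5,000,000x
```

Both numbers match the book: a hundred-step ceiling on serial neural processing, and a transistor roughly five million times faster than a neuron.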
 
this process of inner necessitation excludes any notion of free will
You keep asserting that, and yet °°° and ••• do not suffer revocation of their existence.


I have not only said it, but explained it numerous times. Nobody gets to choose or control what happens at the cellular level, the means and mechanisms of action, yet what happens within 'our' brain determines how we as conscious entities see the world, how we think, what we think and what we do.
All you can do is keep asserting this without argument and it's kind of sad.

What is sad is your apparent inability to understand the explanations and arguments for incompatibilism, regardless of whether they come from me or from anyone I happen to quote, cite or refer to.

So your inability to grasp incompatibilism and the given reasons why compatibilism fails to establish free will in relation to determinism is indeed all encompassing.

I could explain again, I could quote again, I could refer to any number of experiments, case studies and arguments from neuroscience by specialists in cognition and motor action....to no purpose.

You would act like nothing was explained and nothing was provided for you.

Believe whatever you like. As you surely must.
 
The brain as a computer is a metaphor. As the above linked article notes, we’ve been using metaphors for thousands of years for the brain, with the latest technology of each era being adopted as a metaphor for how the brain works.

I take no stand on whether a computer is, or could be, conscious. But the point of the above-linked article is that the brain does not really work like a computer at all. That said, I’m not sure how the issue of how the brain actually works really has bearing on free will. I’m prepared to accept that a computer has a kind of free will and possibly even consciousness, that consciousness could be substrate independent, without endorsing the idea that the brain is a computer or resembles one in any relevant sense.

A metaphor? The brain literally acquires and processes information.

Your brain does not process information … very first clause of the subhead to the article. Later he explains in depth why this is so. Deal with that, please.

Deal with what? Since it was you who quoted ''Your brain does not process information. Your brain is not a computer.'', it implies that you don't believe that the brain processes information or 'computes'?

I did not say the brain is a linear computer like a laptop, it's a parallel information processor....and the article just used the word 'computer' to convey that the brain is an information processor, without going into details on the nature of computing in the brain.

You seized the word 'computer' like a lifeline for the failed argument that is compatibilism.

I did that?

You presented a link that stated: ''Your brain does not process information. Your brain is not a computer.'' - so it is fair to assume that this is what you are arguing.

I pointed out that the article used the word 'computer' as a metaphor for an information processor, that the brain does indeed acquire and process information even if it's not in the same way as your laptop: that it is a parallel rather than a linear processor.

You seized on a word and made far too much of it.

You could have politely asked ''what does the article mean by calling the brain a 'computer'?'' and it would have been explained...but not accepted, I suspect. Not accepted because it goes against the idea of free will.


I explicitly said that whether the brain is a computer or not, or acts like a computer or not, has no bearing on the free will argument. I raise the point about computation only to underscore the fact that you repeatedly make assertions about issues that are contested as if they are already settled fact. It is not a settled fact that the brain is, or is like, a computer — quite the contrary. Just as it’s not an agreed upon fact of what determinism is, as you keep falsely maintaining. It’s not even a settled fact that determinism is true — if the world is entirely quantum, which it appears to be, then determinism is false. The determinism we seem to get at the so-called classical level would then be statistical.

So now the brain is a parallel processing computer?

No, it’s not.

From the book:

From the dawn of the industrial revolution, people have viewed the brain as some sort of machine. They knew there weren't gears and cogs in the head, but it was the best metaphor they had. Somehow information entered the brain and the brain-machine determined how the body should react. During the computer age, the brain has been viewed as a particular type of machine, the programmable computer. And as we saw in chapter 1, AI researchers have stuck with this view, arguing that their lack of progress is only due to how small and slow computers remain compared to the human brain. Today's computers may be equivalent only to a cockroach brain, they say, but when we make bigger and faster computers they will be as intelligent as humans.

There is a largely ignored problem with this brain-as-computer analogy. Neurons are quite slow compared to the transistors in a computer. A neuron collects inputs from its synapses, and combines these inputs together to decide when to output a spike to other neurons. A typical neuron can do this and reset itself in about five milliseconds (5 ms), or around two hundred times per second. This may seem fast, but a modern silicon-based computer can do one billion operations in a second. This means a basic computer operation is five million times faster than the basic operation in your brain! That is a very, very big difference. So how is it possible that a brain could be faster and more powerful than our fastest digital computers? "No problem," say the brain-as-computer people. "The brain is a parallel computer. It has billions of cells all computing at the same time. This parallelism vastly multiplies the processing power of the biological brain."

I always felt this argument was a fallacy, and a simple thought experiment shows why. It is called the "one hundred–step rule." A human can perform significant tasks in much less time than a second. For example, I could show you a photograph and ask you to determine if there is a cat in the image. Your job would be to push a button if there is a cat, but not if you see a bear or a warthog or a turnip. This task is difficult or impossible for a computer to perform today, yet a human can do it reliably in half a second or less. But neurons are slow, so in that half a second, the information entering your brain can only traverse a chain one hundred neurons long. That is, the brain "computes" solutions to problems like this in one hundred steps or fewer, regardless of how many total neurons might be involved. From the time light enters your eye to the time you press the button, a chain no longer than one hundred neurons could be involved. A digital computer attempting to solve the same problem would take billions of steps. One hundred computer instructions are barely enough to move a single character on the computer's display, let alone do something interesting.

But if I have many millions of neurons working together, isn't that like a parallel computer? Not really. Brains operate in parallel and parallel computers operate in parallel, but that's the only thing they have in common. Parallel computers combine many fast computers to work on large problems such as computing tomorrow's weather. To predict the weather you have to compute the physical conditions at many points on the planet. Each computer can work on a different location at the same time. But even though there may be hundreds or even thousands of computers working in parallel, the individual computers still need to perform billions or trillions of steps to accomplish their task. The largest conceivable parallel computer can't do anything useful in one hundred steps, no matter how large or how fast.

Here is an analogy. Suppose I ask you to carry one hundred stone blocks across a desert. You can carry one stone at a time and it takes a million steps to cross the desert. You figure this will take a long time to complete by yourself, so you recruit a hundred workers to do it in parallel. The task now goes a hundred times faster, but it still requires a minimum of a million steps to cross the desert. Hiring more workers— even a thousand workers— wouldn't provide any additional gain. No matter how many workers you hire, the problem cannot be solved in less time than it takes to walk a million steps. The same is true for parallel computers. After a point, adding more processors doesn't make a difference. A computer, no matter how many processors it might have and no matter how fast it runs, cannot "compute" the answer to difficult problems in one hundred steps.

So how can a brain perform difficult tasks in one hundred steps that the largest parallel computer imaginable can't solve in a million or a billion steps? The answer is the brain doesn't "compute" the answers to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough to do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn't a computer at all.


Semantics; whether you say the brain 'computes' or processes information, the brain does in fact acquire and process information in order to respond to events in the world.

There is no magic element as the director, no homunculus, no will agency to alter outcomes.

The information state of the system determines the outcome in any given instance in time.

Which is why you may make a decision one moment and regret it the next, why you may say something in the moment only to wish you hadn't said that a second later.

Plus you are too hung up on words. 'Computes' is simply a word used to convey 'a computational system' even though, as I said, it is not like a laptop, desktop or mainframe.


''Is the brain a computer? That is, does neural computation explain cognition? One reason to believe it does is that, much like a computer, the brain functions as an input-output system. Stimuli received as input from sensory systems are processed and a corresponding response is generated. But even among those who do believe the brain computes, there exist various theories about which type of computational system the brain might be. Cognitive scientists on the other side of the debate believe that the brain could not possibly be a computer, for its operation doesn’t follow any pattern of behavior we would expect if it were one. The debate between these two groups continues even today and little progress has been made toward answering the question.


However, researchers at the University of Missouri in St. Louis, Gualtiero Piccinini and Sonya Bahar, say that while the brain is in fact a computer, it is not the kind of computer that traditional computationalists make it out to be''

'In a new paper published in Cognitive Science, the authors argue that the nervous system fulfills four criteria that define computational systems. First, the nervous system is an input-output system. For example, the nervous system takes sensory information such as visual data as input and also generates movement of the muscles as output. Second, the nervous system is functionally organized in specific way such that it has specific capacities, such as generating conscious experience. Third, the brain is a feedback-control system: the nervous system controls an organism’s behavior in response to its environment. Finally, the nervous system processes information: feedback-control can be performed because the brain’s internal states correlate with external states. Systems that fulfill these four criteria are paradigmatic computational systems. But how does the brain satisfy the criteria?''

The authors conclude that if the brain is a computer, but neither analog nor digital, then it must be of its own kind, or sui generis. This may have important implications for how we understand artificial intelligence. If the brain differs in computational type from the electronic computer in such a way that the electronic computer can only model brain function, strong artificial intelligence may not be possible. But it may also turn out the differences between electronic and neural computers are small enough that a real-life HAL is on the horizon. ''
 
The determinism we seem to get at the so-called classical level would then be stasticial
So, an unfortunate element of this, while it does not in any way resolve our need for statistical analysis, is what I discussed as "just-so determinism" or "fixed statistic determinism".

In my DF universe there are dice rolls. The dwarves MUST take "statistical risks", because the outcome of certain things in the universe for them cannot be known. Fundamentally, this is a quantum interaction from their perspective.

Nonetheless, they are in a deterministic universe: I can tear it all down, run it from the beginning on the same seed, and that dwarf will be right back in that hallway.

I labeled this "just-so determinism", and while this explains why the beings in that and other universes are limited to stochastic modeling and ETERNALLY relegated to acting on their best guesses rather than on perfect predictive models that reduce choice to triviality, it does not actually revoke the determinism.

In this way, a statistically random game like Snakes and Ladders is still potentially deterministic, susceptible to a replay.
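The replay point is easy to demonstrate with any seeded pseudorandom generator; a sketch, with the game stubbed down to nothing but its dice rolls:

```python
import random

def dice_history(seed, turns=20):
    """Every roll comes from a generator seeded once at the start, so
    'tearing it all down and running it from the beginning on the same
    seed' reproduces the identical sequence of outcomes."""
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(turns)]

first_run = dice_history(seed=1234)
replay    = dice_history(seed=1234)
assert first_run == replay   # same seed: the whole "random" game is a replay
```

From inside the game the rolls are irreducibly statistical; from outside, the entire history is fixed by the seed, which is the "just-so determinism" being described.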

The real question here is: does the play ever dip through a decision engine, or does it always just poll from the "dice roll sequence"?

Our universe satisfies the requirements of Free Will on account of it not being purely "just-so deterministic" and having player decisions and player statistical analysis available.
I have not only said it, but ~~explained~~ made unargued and fallacious assertions about it numerous times.
FTFY.

Nobody gets to choose or control what happens at the cellular level
Except that we do get to choose exactly the things at the cellular level that are a product of our choice. So everyone gets to choose and control what happens at the cellular level with regard to some things, in the same way that nobody gets to choose what happens at the cellular level with other things.

I choose what happens at the atomic and even subatomic level of an excited laser diode when I decide what happens at the cellular level of all the cells with regards to their release of neurotransmitters that cause the chain reaction that results in the button going down on the laser pointer.

And while it may be the result of my actions, it was not my choice explicitly to be blinded permanently as the UV laser reflects off the mirror into my unprotected eye.

And my having made that post was a choice to control what happens at the cellular level of a number of neurons, by creating a cyclic pattern that will burn in over time, considering how dangerous it is to play with a UV laser, so that my will to never permanently blind myself with a UV laser by doing something unconsidered is more likely to be free, and remains provisionally so.

I didn't choose it explicitly when I was a kid, but I absolutely chose it to be so implicitly nonetheless.

Now I choose it explicitly, on account of explicit consent and request and knowledge.

Your latest go-to appears to be the false dichotomy.
 
Whether it is "I will buy this car", or "I will buy this melon", or "I will invite Cindy to the prom", or "I will have the salad for dinner", it is all a matter of a "will" that we choose for ourselves and by ourselves, free of coercion and undue influence.

Nothing is free from inner necessitation. ''I will buy this car'' has a large web of causality, antecedents that bring you to a decision that has no alternative. You buy the car only to realize the next day that you should have chosen the other one.

I'm actually counting on both inner necessitation and outer necessitation to choose my car. What features would I like to have in my new car? Now, what are my actual possibilities here on the car lot? Both inner and outer causes play a role in my decision.

Sure, that is your experience. But below the threshold of your awareness lies the means and mechanisms of what you are experiencing.

You are not aware that what you feel you are doing is decided before it's brought to awareness.

First the inputs, then processing followed by conscious experience. The former unconsciously determines the latter.

Recall Gazzaniga's narrator function:


Michael Gazzaniga
''Our brain is not a unified structure; instead it is composed of several modules that work out their computations separately, in what are called neural networks. These networks can carry out activities largely on their own. The visual network, for example, responds to visual stimulation and is also active during visual imagery—that is, seeing something with your mind’s eye; the motor network can produce movement and is active during imagined movements. Yet even though our brain carries out all these functions in a modular system, we do not feel like a million little robots carrying out their disjointed activities. We feel like one, coherent self with intentions and reasons for what we feel are our unified actions. How can this be?''

''The left-hemisphere interpreter is not only a master of belief creation, but it will stick to its belief system no matter what.
Patients with “reduplicative paramnesia,” because of damage to the brain, believe that there are copies of people or places. In short, they will remember another time and mix it with the present. As a result, they will create seemingly ridiculous, but masterful, stories to uphold what they know to be true due to the erroneous messages their damaged brain is sending their intact interpreter. One such patient believed the New York hospital where she was being treated was actually her home in Maine. When her doctor asked how this could be her home if there were elevators in the hallway, she said, “Doctor, do you know how much it cost me to have those put in?” The interpreter will go to great lengths to make sure the inputs it receives are woven together to make sense—even when it must make great leaps to do so. Of course, these do not appear as “great leaps” to the patient, but rather as clear evidence from the world around him or her.''

''Neuroscience is making inroads into understanding how the circuits and logic of neurons carry out behaviors. We understand more about certain thoughts and behaviors than others. One thing we are certain of is that the ‘‘work’’ in the brain happens before we are consciously aware of our mental struggles. Researchers have, since as early as 1965, advanced our understanding of the fact that much of the work is done at the subconscious level.''

''Goldberg brings his description of frontal dysfunction to life with insightful accounts of clinical cases. These provide a good description of some of the consequences of damage to frontal areas and the disruption and confusion of behavior that often results. Vladimir, for example, is a patient whose frontal lobes were surgically resectioned after a train accident. As a result, he is unable to form a plan, displays an extreme lack of drive and mental rigidity and is unaware of his disorder. In another account, Toby, a highly intelligent man who suffers from attention deficits and possibly a bipolar disorder, displays many of the behavioral features of impaired frontal lobe function including immaturity, poor foresight and impulsive behavior.''

The personal narrative.
''For example, in one study, researchers recorded the brain activity of participants when they raised their arm intentionally, when it was lifted by a pulley, and when it moved in response to a hypnotic suggestion that it was being lifted by a pulley.''
 

You could have politely asked ''what does the article mean by the brain being a 'computer'?'' and it would have been explained...but not accepted, I suspect. Not accepted because it goes against the idea of free will.



Why would I ask you to explain something you evidently know nothing about?

Now you dismiss the articles as playing semantic games. They are not playing semantic games. They are telling you that the brain as a computer is a metaphor, and a failed one at that. They are telling you why the brain is not a computer and does not parallel process. It’s all spelled out.

Finally, you once again inaccurately contend that I dismiss the brain as being a computer because if it were a computer, free will would be disconfirmed. This is nonsense. I have already said, at least twice, that whether the brain is a computer or not makes no difference to compatibilist free will. I have already told you that the reason I bring up the computer issue at all is because of your well-known tendency to state as settled fact ideas that are not settled at all, but in dispute. I have said all this and you ignore or distort it. How is this anything but disingenuousness, to put it politely, on your part?
 
The left-hemisphere interpreter is not only a master of belief creation, but it will stick to its belief system no matter what.
This explains many things about you, then. My belief system incorporates doubt, and yours clearly does not. My whole process here has been one of refining my understanding of this "responsibility math", mostly by reading and doubting whether certain definitions and discussions suffice to fully grasp the underlying framework of arbitrary sequence executions.

This goes back to something I have noticed: humans seem to have two ways of operating in the world, and it's really hard to teach someone to use a method they aren't well practiced in.

The first is goal-orientation.

In goal orientation, someone explicitly looks at what their needs are, and then passes those needs off to a secondary system: what need does it "feel like" I must focus on most right now? Additionally, one can ask "why?" and "does that actually make sense?"

From there, one gets to ask "how?" and then, again, "does that make sense?"

Eventually this results in a list of actions and requirements. The proof of that is that I can express this via a piece of paper. Someone else can pick it up and execute on it all the same, once they are done parsing it, compiling it into signals that can be handled mathematically, and executing on those patterns of signals.

Essentially, humans are capable of JIT execution on internally generated scripts derived from goals.
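To make the point concrete, here is a toy sketch in Python (all names and steps are hypothetical, not any real planner): the "plan" is just data, an ordered list of actions and requirements, and any executor that can parse that list can run it, regardless of who wrote it.

```python
def make_plan(goal):
    """Derive a list of (action, requirement) steps from a goal.
    Toy example only: the steps are illustrative, not a real planner."""
    if goal == "leave the room":
        return [("walk to door", "door reachable"),
                ("open door", "door unlocked"),
                ("step through", "door open")]
    return []

def execute(plan, world):
    """A generic executor: check each step's requirement against the
    world state, then record that the action happened."""
    for action, requirement in plan:
        if not world.get(requirement, False):
            return f"plan failed at: {action} (needs: {requirement})"
        world[action] = True  # the action is now part of the world state
    return "goal reached"

world = {"door reachable": True, "door unlocked": True, "door open": True}
print(execute(make_plan("leave the room"), world))  # goal reached
```

The only point of the sketch is that the plan is an object: the same list of steps works whether the author or someone else hands it to the executor, exactly as with a plan written on a piece of paper.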

But that's only one of the models.

There's another model of behavior in play in the human condition: The Chinese Room model.

The idea of The Chinese Room is that someone is standing in a room. There are no windows, just a door through which food, water, and other comforts are passed in and waste is removed, plus a camera, and a book kept in a closet with a back door that only opens when the front door is locked, through which the book is regularly exchanged.

The operation of the room is thus: the distress of the person in the room is noticed, and a piece of paper is fed in through the slot in the door. The person finds the page on which the markings from the paper appear, then utters the PinYin printed next to those markings, an utterance that corresponds to nothing the person themselves understands.

Then something happens and the person gets what they want.

This operation amounts to treating every operation in the universe as a lookup table of past observations of "that worked", and "that didn't".

It completely overlooks the power of "I've never done that, nor seen it, nor thought about it, but (observed structure) implies (principle of operation), so (this will work)".
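The contrast between the two models can be sketched in a few lines of Python (all situations and rules here are invented for illustration): a lookup-table agent only knows what it has already seen work, while a model-based agent can derive an answer to a situation it has never encountered.

```python
# Chinese-Room style: a table of past observations of "that worked".
lookup_table = {
    "light switch down, room dark": "flip switch up",
}

def lookup_agent(situation):
    # No entry means no behavior at all: novelty defeats it.
    return lookup_table.get(situation)

def model_agent(situation):
    # Toy "principle of operation": switches toggle their circuit,
    # so any down switch in a dark place is worth flipping.
    if "switch down" in situation and "dark" in situation:
        return "flip switch up"
    return None

novel = "garage switch down, garage dark"
print(lookup_agent(novel))  # None: never seen before
print(model_agent(novel))   # flip switch up: derived from structure
```

The table can only replay "that worked"; the model generalizes from observed structure to a principle of operation, which is the power the lookup-table mode of living overlooks.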

Then, one of the beliefs that was planted early in my belief structure was the belief that I ought to doubt my belief structure.

This is a rather rare ability, and even then it is rarely leveraged against a belief. It might be rather hard to study the dislodging of beliefs for that reason, so I don't think much can be said of the study of it. It means examining something that a vanishingly small number of minds have only a handful of opportunities to ever operate on meaningfully.

There is even a name for such moments: "Existential Crisis".

Most people fight it, and double down. Some people let their tightly held beliefs open up in that moment and reorient and even shove certain patterns out entirely.
 