• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

According to Robert Sapolsky, human free will does not exist

As usual, the debate over free will is really a debate over what it could possibly mean to say that someone has free will, although everyone tends to act as if there were some kind of agreed-upon meaning for the term. What kind of "freedom" are we talking about? I could make a very strong case for there being a common-sense notion of free will that is essentially hardwired into highly social animals such as ourselves. By that, I mean that there are special universal linguistic principles for expressing agency that hold across all human languages. Those principles have to do with the way we distinguish animate (agentive) causation from inanimate (generic and instrumental) causation. Agentive noun phrases tend to end up as subjects of sentences, because people are interested in assigning responsibility to agents. We don't blame inanimate objects for hurting us, although we may personify them--treat them as if they were agents--and curse them when they cause us pain.

I don't want to get deeply into the linguistics of agentive and instrumental noun phrases, because that would just bore everyone here and not make a lot of sense to those who aren't specialists in semantic theories. However, I do want to bring up the important issues of responsibility and willful control, because those are the concepts that are essential to any common sense description of free will. The danger of eliminativism--denying the reality of free will--is that people often couple their dismissal with a sense that human agents aren't really responsible for their actions. If they were compelled to always behave in only one way, then what is the point of blaming them for all the bad things they do? Or praising them for the good things?

As a compatibilist, I take the position that we live in a chaotic deterministic environment that our minds sort into three temporal buckets:

1) The past, which is determinate and accessible to episodic memory
2) The future, which is indeterminate and accessible to imagined outcomes
3) The present, which is defined by bodily experiences of sensory input and willful control

All human languages have elaborate rule-governed methods for tense and time reference. All of them have elaborate methods for expressing chains of causation that continuously make reference to the actions of free agents--beings that are responsible for outcomes of events under their control. The concept of free will needs to be defined in terms of the degree to which agents have control over outcomes.

Although I don't always agree with everything Patricia Churchland says about free will, I think she gets it essentially right:

The Big Questions: Do we have free will?

To begin to update our ideas of free will, I suggest we first shift the debate away from the puzzling metaphysics of causal vacuums to the neurobiology of self-control. The nature of self-control and the ways it can be compromised may be a more fruitful avenue...than trying to force the issue of "freely chosen or not".

Self-control can come in many degrees, shades, and styles. We have little direct control over autonomic functions such as blood pressure, heart rate and digestion, but vastly more control over behaviour that is organised by the cortex of the brain...

Although Churchland isn't a classical compatibilist, I would consider her to be something of a neocompatibilist. She doesn't really spend a lot of time trying to define what free will is, but she does thoroughly understand why people think that it is necessary to treat it as a property that human beings (and other animals) actually have.
 
You are speculating and asserting what you believe despite not having any evidence to support your assertions.
That the world doesn't go away when you aren't looking at it. Hardly a huge leap.

That's not what I said, suggested or even hinted.


In fact, it might be one of the most basic realizations that even most two-year-olds master.

I can't help someone who is intractably ignorant of how behavioral systems function in the understanding and analysis of behavioral systems.

The world does not disappear when someone dies. When someone dies, they no longer exist and the world goes on regardless.

Why would you think otherwise?

Do you think that you exist as long as the world exists?

What do you think happens to 'you' if the brain can no longer generate a coherent, conscious experience of 'you' as a self-conscious entity?

Are you still present? If so, how?
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.
I won't pretend to know why you are discussing subjective idealism, but I am interested in addressing whether I actually did accuse him of this.

Rather, my point was to discuss how his inability to observe or understand a system that is in some way conscious of its environment or itself does not preclude the consciousness existing as an entity conscious of the things its awareness describes of its state.

DBT lacking the momentary awareness OF his own awareness of other things does not mean he lacks awareness of other things... It specifically means he lacks "awareness of awareness". The same goes for when he is not aware of the calculator's awareness, generally through some understandable but still unfortunate failing of the ability to empathize with such machines.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.
I won't pretend to know why you are discussing subjective idealism, but I am interested in addressing whether I actually did accuse him of this.

Rather, my point was to discuss how his inability to observe or understand a system that is in some way conscious of its environment or itself does not preclude the consciousness existing as an entity conscious of the things its awareness describes of its state.

DBT lacking the momentary awareness OF his own awareness of other things does not mean he lacks awareness of other things... It specifically means he lacks "awareness of awareness". The same goes for when he is not aware of the calculator's awareness, generally through some understandable but still unfortunate failing of the ability to empathize with such machines.

I think that you are trying to build an elaborate straw man out of stretched assumptions about what is in DBT's mind. Although you don't mention subjective idealism, your argument comes off as an attempt to accuse him of taking that kind of stance. I see his argument as just a fairly common variant of incompatibilism, which I disagree with. I think I understand it, but to me the flaw is an assumption that free will necessarily implies freedom from causal determinism. That seems to be a widely held interpretation of free will in these debates, and he is far from the only one who takes that position. As a compatibilist, my view is that free will is about freedom to control an outcome, especially in the face of a future that is indeterminate at the point a choice is taken. So I would take the position that his concept of what free will is about is wrong, not that he thinks things disappear when people aren't paying attention.

It is worth remembering that the ancient debate over free will had nothing to do with causal determinism. It had to do with the paradox of an omniscient, omnipotent God judging the behavior of creations that it knew for certain would misbehave. God's certain knowledge of the future behavior of humans corresponds to causal determinism in the modern godless variant of the paradox. How can we hold people any more responsible for their actions if they are no more than "moist robots", to use Scott Adams's famously humorous take on the subject:

“Free will is an illusion. Humans are nothing but moist robots.”


The compatibilist's stance takes free will to be a fully determined process. Even though we are essentially automatons that are unable to step outside of our "programming", we don't know the future. The future is indeterminate and we have control over certain potential outcomes of our actions. If robots accepted responsibility for their actions and blame for their mistakes, they might be capable of modifying their own behavioral strategies to avoid the mistakes. Human beings come equipped with evolution-designed guard rails of that sort, but we don't yet know how to build intelligent machines that are capable of exercising the same kind of learning and control. That is, we can't yet create machines with free will that take responsibility for their actions.
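To make the "robots accepting blame" idea concrete, here is a toy sketch in Python. Everything in it, including the `BlameLearner` name and its methods, is hypothetical and illustrative, not a claim about how any real AI system works:

```python
# Toy sketch (hypothetical): an agent that "accepts blame" by
# downgrading whichever action strategy produced a mistake.

class BlameLearner:
    def __init__(self, actions):
        # Start with equal preference for every available action.
        self.prefs = {a: 0.0 for a in actions}

    def choose(self):
        # Pick the currently most-preferred action
        # (ties go to the earliest-listed action).
        return max(self.prefs, key=self.prefs.get)

    def accept_blame(self, action, penalty=1.0):
        # "Taking responsibility": the agent revises its own
        # strategy in response to a mistake attributed to it.
        self.prefs[action] -= penalty

agent = BlameLearner(["grab", "wait"])
first = agent.choose()      # initially prefers "grab"
agent.accept_blame(first)   # blamed for a mistake
second = agent.choose()     # now prefers "wait"
```

The point of the sketch is only the shape of the loop: a mistake is attributed to the agent, the agent revises its own strategy, and future behavior changes. Whether that kind of loop deserves the words "responsibility" or "free will" is, of course, exactly the question under debate.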
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.
I won't pretend to know why you are discussing subjective idealism, but I am interested in addressing whether I actually did accuse him of this.

Rather, my point was to discuss how his inability to observe or understand a system that is in some way conscious of its environment or itself does not preclude the consciousness existing as an entity conscious of the things its awareness describes of its state.

DBT lacking the momentary awareness OF his own awareness of other things does not mean he lacks awareness of other things... It specifically means he lacks "awareness of awareness". The same goes for when he is not aware of the calculator's awareness, generally through some understandable but still unfortunate failing of the ability to empathize with such machines.

I think that you are trying to build an elaborate straw man out of stretched assumptions about what is in DBT's mind. Although you don't mention subjective idealism, your argument comes off as an attempt to accuse him of taking that kind of stance. I see his argument as just a fairly common variant of incompatibilism, which I disagree with. I think I understand it, but to me the flaw is an assumption that free will necessarily implies freedom from causal determinism. That seems to be a widely held interpretation of free will in these debates, and he is far from the only one who takes that position. As a compatibilist, my view is that free will is about freedom to control an outcome, especially in the face of a future that is indeterminate at the point a choice is taken. So I would take the position that his concept of what free will is about is wrong, not that he thinks things disappear when people aren't paying attention.

It is worth remembering that the ancient debate over free will had nothing to do with causal determinism. It had to do with the paradox of an omniscient, omnipotent God judging the behavior of creations that it knew for certain would misbehave. God's certain knowledge of the future behavior of humans corresponds to causal determinism in the modern godless variant of the paradox. How can we hold people any more responsible for their actions if they are no more than "moist robots", to use Scott Adams's famously humorous take on the subject:

“Free will is an illusion. Humans are nothing but moist robots.”


The compatibilist's stance takes free will to be a fully determined process. Even though we are essentially automatons that are unable to step outside of our "programming", we don't know the future. The future is indeterminate and we have control over certain potential outcomes of our actions. If robots accepted responsibility for their actions and blame for their mistakes, they might be capable of modifying their own behavioral strategies to avoid the mistakes. Human beings come equipped with evolution-designed guard rails of that sort, but we don't yet know how to build intelligent machines that are capable of exercising the same kind of learning and control. That is, we can't yet create machines with free will that take responsibility for their actions.
I think it's inappropriate to say we can't create such machines.

Such machines are BAD at taking responsibility but they can and do, and do so on the basis of a learned heuristic awareness of their mistakes, but 'bad at it' and 'can't do it' are different things, in fact mutually contradictory.
 
I think it's inappropriate to say we can't create such machines.

Such machines are BAD at taking responsibility but they can and do, and do so on the basis of a learned heuristic awareness of their mistakes, but 'bad at it' and 'can't do it' are different things, in fact mutually contradictory.

OK, but this gets us into your hobbyhorse derail about artificial intelligence actually being real intelligence. We've had that debate elsewhere. I think it is inaccurate to say that we can create such machines, and I think you are too prone to personifying automatons that simulate intelligent behavior.
 
I think it's inappropriate to say we can't create such machines.

Such machines are BAD at taking responsibility but they can and do, and do so on the basis of a learned heuristic awareness of their mistakes, but 'bad at it' and 'can't do it' are different things, in fact mutually contradictory.

OK, but this gets us into your hobbyhorse derail about artificial intelligence actually being real intelligence. We've had that debate elsewhere. I think it is inaccurate to say that we can create such machines, and I think you are too prone to personifying automatons that simulate intelligent behavior.
And in response, I think others are too prone to limiting their understanding of behavior to human systems: you accuse me of anthropomorphism, I accuse you of anthropocentrism, insofar as none of my discussions about automatons actually rely on the notion that biology has anything to do with it; I assign that importance to switches, which we ALL have, and whose interaction is the only thing we have discovered that successfully and precisely directs behavior according to rich interactions with data.

This is important to me because my entire understanding of free will is generalized to behavioral systems, not specifically humans.
 
I think it's inappropriate to say we can't create such machines.

Such machines are BAD at taking responsibility but they can and do, and do so on the basis of a learned heuristic awareness of their mistakes, but 'bad at it' and 'can't do it' are different things, in fact mutually contradictory.

OK, but this gets us into your hobbyhorse derail about artificial intelligence actually being real intelligence. We've had that debate elsewhere. I think it is inaccurate to say that we can create such machines, and I think you are too prone to personifying automatons that simulate intelligent behavior.
And in response, I think others are too prone to limiting their understanding of behavior to human systems: you accuse me of anthropomorphism, I accuse you of anthropocentrism, insofar as none of my discussions about automatons actually rely on the notion that biology has anything to do with it; I assign that importance to switches, which we ALL have, and whose interaction is the only thing we have discovered that successfully and precisely directs behavior according to rich interactions with data.

This is important to me because my entire understanding of free will is generalized to behavioral systems, not specifically humans.

Personification is not anthropomorphism. It is the Clever Hans phenomenon applied to computer programs that simulate some aspects of intelligent behavior.
 
I think it's inappropriate to say we can't create such machines.

Such machines are BAD at taking responsibility but they can and do, and do so on the basis of a learned heuristic awareness of their mistakes, but 'bad at it' and 'can't do it' are different things, in fact mutually contradictory.

OK, but this gets us into your hobbyhorse derail about artificial intelligence actually being real intelligence. We've had that debate elsewhere. I think it is inaccurate to say that we can create such machines, and I think you are too prone to personifying automatons that simulate intelligent behavior.
And in response, I think others are too prone to limiting their understanding of behavior to human systems: you accuse me of anthropomorphism, I accuse you of anthropocentrism, insofar as none of my discussions about automatons actually rely on the notion that biology has anything to do with it; I assign that importance to switches, which we ALL have, and whose interaction is the only thing we have discovered that successfully and precisely directs behavior according to rich interactions with data.

This is important to me because my entire understanding of free will is generalized to behavioral systems, not specifically humans.

Personification is not anthropomorphism. It is the Clever Hans phenomenon applied to computer programs that simulate some aspects of intelligent behavior.
Bold claim.

You could be inverting the Clever Hans effect with humans, from my perspective: thinking yourself more sophisticated than you are because you simulate some aspects of intelligent behavior.

It happens all the GD time among humans.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.

It's puzzling, I have no idea why Jarhyn would think so. Or given that it has nothing to do with what I have ever said, why he would use it as a means of defense.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.

I haven't seen any evidence that DBT supports Bishop Berkeley's theory of immaterialism or so-called subjective idealism. So far, everything he has posted suggests the opposite.

It's puzzling, I have no idea why Jarhyn would think so. Or given that it has nothing to do with what I have ever said, why he would use it as a means of defense.
It's puzzling that you couldn't read the discussion that happened after that to understand that's not what was being discussed at all.
 
That's not what I said, suggested or even hinted.
Yes, it quite is. You say consciousness that you cease to detect ceases to be consciousness with respect to the awareness that powers behavior.

That makes no sense. It's not related to what I said, or what I provided in regard to the consequences of memory loss. When the brain permanently loses memory function, the senses are working and transmitting information to the brain, but without memory integration the sensory information alone does not permit recognition.

To understand the world around us, including ourselves, requires memory. Memory is the key to coherent conscious experience. Without recognition/memory function, the patient experiences sensations that make no sense, sights that make no sense, sounds, smells, etc that make no sense.

That is not consciousness as it's experienced by a healthy, functional brain.



If you cannot experience your own consciousness of a thing, you have proclaimed that to be not-conscious!

So, quite what you suggested, even if you can't understand how or why.


You miss the point. You are running off into briars and brambles again.

You need to brush up on the basics:

''Memory is an essential cognitive function that permits individuals to acquire, retain, and recover data that defines a person’s identity (Zlotnik and Vansintjan, 2019). Memory is a multifaceted cognitive process that involves different stages:

Working memory is primarily associated with the prefrontal and posterior parietal cortex (Sarnthein et al., 1998; Todd and Marois, 2005). Working memory is not localized to a single brain region, and research suggests that it is an emergent property arising from functional interactions between the prefrontal cortex (PFC) and the rest of the brain (D’Esposito, 2007).

Additionally, lesion studies have provided further confirmation regarding the importance of these regions. These investigations have revealed that impairment in performing phonological working memory tasks can transpire following damage inflicted upon the left hemisphere, particularly on perisylvian language areas (Koenigs et al., 2011). It is common for individuals with lesions affecting regions associated with the phonological loop, such as the left inferior frontal gyrus and superior temporal gyrus, to have difficulty performing verbal working memory tasks. Clinical cases involving patients diagnosed with aphasia and specific language impairments have highlighted challenges related to retaining and manipulating auditory information. For example, those who sustain damage specifically within their left inferior frontal gyrus often struggle with tasks involving phonological rehearsal and verbal working memory activities, and therefore, they tend to perform poorly in tasks that require manipulation or repetition of verbal stimuli (Saffran, 1997; Caplan and Waters, 2005).''

 
That makes no sense. It's not related to what I said, or what I provided in regard to the consequences of memory loss
You have stated multiple times in multiple tacit ways that when you "lose self awareness/self-consciousness" you "lose awareness/consciousness".

This is exactly assuming something is gone simply because you can no longer directly detect it.
 
That makes no sense. It's not related to what I said, or what I provided in regard to the consequences of memory loss
You have stated multiple times in multiple tacit ways that when you "lose self awareness/self-consciousness" you "lose awareness/consciousness".

When memory function is irrevocably lost, it's all gone.

'Consciousness' is broad term that encompasses self awareness, being conscious and aware of the world around us, experiencing feelings and thoughts, conscious actions, etc.

As I also said, you may be conscious and focussed on a task, reading, working, watching a movie or whatever, without being self-conscious. All of this is gradually lost as memory function breaks down, where the symptoms are related to the degree and severity of the memory failure, and in the final stages you no longer recognize yourself or anyone around you and no longer understand what is happening, because nothing is recognizable, memory being the key to recognition and comprehension.

This is exactly assuming something is gone simply because you can no longer directly detect it.

You no longer experience it. That becomes evident in the way a patient can no longer recognize or comprehend the world around them.

Keep in mind that we are not talking about memory impairment, decline, partial loss or short term memory, but in the final stages, the whole range.
 
You no longer experience it
The problem here is that you are failing to understand that there is more than one meaning of "you" being discussed here, and you are conflating them all, unable in the moment to think of the "you" of the kernel reading a report about the thoughts that led you to your latest decisions, and the "you" of the sum total of meat contained by your skin.

When "you" the tiny little piece of your brain loses the function, it is not irrevocably lost unless that part of the brain, all its inputs, are cut off. It doesn't matter that its outputs are cut off. At the point where the outputs are cut off, there is still a generation of awareness of you-meatbag's memories, they just don't make it to you-small-piece.

This is what I'm saying: there's still consciousness there, and other parts may still be conscious of themselves with their-small-piece sphere of behaviors. Just because it has no mouth to speak to you does not mean it lacks the desire to scream.

Through this entire conversation you have been conflating the two addresses of "you".
 
You no longer experience it
The problem here is that you are failing to understand that there is more than one meaning of "you" being discussed here, and you are conflating them all, unable in the moment to think of the "you" of the kernel reading a report about the thoughts that led you to your latest decisions, and the "you" of the sum total of meat contained by your skin.

No.

'You' in this instance is all the information that makes you who you are as a conscious being: your species and genetic makeup, brain, where you were born, the influence your family had in your formative years, language, culture, social values, life experiences, education, work, hobbies, interests, brought together as a coherent whole, which of course is made possible through memory function: information acquisition, retention and integration with inputs that enable self-identity, personality, perception, thought, planning and rational action, etc, all the attributes and features of what we call 'consciousness.'

Without memory function, as in total loss, the features, abilities and attributes of consciousness as we experience it are gone.

That is what the evidence, countless case studies, tells us.

When "you" the tiny little piece of your brain loses the function, it is not irrevocably lost unless that part of the brain, all its inputs, are cut off. It doesn't matter that its outputs are cut off. At the point where the outputs are cut off, there is still a generation of awareness of you-meatbag's memories, they just don't make it to you-small-piece.

This is not about partial memory loss or mere impairment.

The point is that memory function is the very foundation of who we are and how we act.

You must know what happens when memory loss comes to the final stages.


This is what I'm saying: there's still consciousness there, and other parts may still be conscious of themselves with their-small-piece sphere of behaviors. Just because it has no mouth to speak to you does not mean it lacks the desire to scream.

Through this entire conversation you have been conflating the two addresses of "you".

Incomprehensible sensations, shapes, sounds, noises, etc, that make no sense, not knowing who or what you are, no self awareness, unable to function, unable to recognize common objects or people you have known your entire life is not consciousness as you currently experience it.
 
'You' in this instance is all the information that makes you who you are as a conscious being
No, I explained what I meant by 'you', and that it's contextual, and showed that there are a couple different kinds. You have been conflating these. If you want to insist on conflating these, we're done.

As it is, my position is that some manner of consciousness exists at all points in the universe ubiquitously, even if it is effectively disjoint (such as the lack of joint between the calculator's experience and your own, the opacity rendered by the disjoint between your neurons and its memory and state history).

If I were to use the word "you" as you just stated it, it would mean "literally the entire observable universe" since literally the entire observable universe is part of what makes me conscious of the things that I am, which would be dumb.
Incomprehensible sensations, shapes, sounds, noises, etc, that make no sense, not knowing who or what you are, no self awareness, unable to function, unable to recognize common objects or people you have known your entire life is not consciousness as you currently experience it.

Yet again, "not consciousness as you currently experience it" is not even actually a sensible statement.

Even the consciousness you have now is not consciousness as you experienced it a second ago, because what we are aware of changes both with the configuration of the thing generating awareness of stuff (so as to change what it CAN be aware of), AND the information being provided from which such awareness of phenomena is assembled.

Consciousness of things, however, exists in both situations: one is consciousness of well ordered data, of many data extracted from a complicated signal, and the other is "of noise". Both are "consciousness", it's just that one isn't very useful.

Yet again, if you can't stop conflating meanings and usages, you are just playing pigeon chess at that point.
 
Incomprehensible sensations, shapes, sounds, noises, etc, that make no sense, not knowing who or what you are, no self awareness, unable to function, unable to recognize common objects or people you have known your entire life is not consciousness as you currently experience it.
Yet again, "not consciousness as you currently experience it" is not even actually a sensible statement.

Even the consciousness you have now is not consciousness as you experienced it a second ago, because what we are aware of changes both with the configuration of the thing generating awareness of stuff (so as to change what it CAN be aware of), AND the information being provided from which such awareness of phenomena is assembled.

Consciousness of things, however, exists in both situations: one is consciousness of well ordered data, of many data extracted from a complicated signal, and the other is of noise. Both are "consciousness", it's just that one isn't very useful.
 