
According to Robert Sapolsky, human free will does not exist

Frame halting is a technical problem in programming, not human psychology.
No, it's more a physical problem. There is a rate of change and a moment of change, and a momentary state understandable by looking at the thing.

Well, the concept of free will is mostly a mental phenomenon that involves what eliminativists have pejoratively labeled "folk psychology". That's what this thread is about.


My discussion of compatibilism, honestly, mostly happened over the last few years, where this was well explored, and honestly, I'm just not interested in having the same discussion again if you didn't read that thread. I even told you where you could read about my thoughts on the subject without me re-hashing it.

I'm interested in the topic under discussion in this thread, and I'm specifically interested in the topic of compatibilism, whether that interests you or not. If you don't want to have the same discussion, why bring it up? Just focus on the current topic.


The analogy to software and hardware (which you seemed not to actually read, despite me telling you where it was) is apt, because we are computational systems at our core. We have a momentary state, and we have some incalculable but estimable future state, and in each moment we send signals onward to manage our future state, based on our present identification and evaluation of our past states.

I understand the analogy perfectly well, and I have explained my problems with it elsewhere. Human brains are, of course, analog machines that don't work the same as digital ones. Brains are not generalized computing devices that run programs and store information in digital repositories. There are some fundamental differences where the analogy becomes very unhelpful. Analogies always break down, so they do not make for sound argument. They are useful for teaching new ideas, but I have found your analogies obscure and unhelpful, sorry to say.

My point here has been that this can be directed at our goals, despite the fact that you keep repeating that we can't. I very well can quash or disregard or prevent the return or the first appearance of some proposed goal within the environment of my own experience, and my point in replying to you is merely to say "quit selling such things short".

I never once said that you can't make a useful analogy to explain your perspective. The problem is that it isn't useful. It doesn't really say anything different, except that one can use computer jargon to talk about psychological phenomena. If I knew what you are actually trying to sell such that I found value in it, perhaps I wouldn't sell it so short. Personally, I think you stretch the brain-computer analogy to the point where it loses any value.
 
analog machines that don't work the same as digital ones
Maybe different in terms of how much information a single interaction can operate on, but not in terms of the basic function of switching actions, nor is it different in terms of gate mechanics nor contingent process.

That the gates are more continuous and have activation profiles and deferred activation makes the ability to access polar-rotational action more directly, but it doesn't fundamentally change that there are exchanges of values happening between switches, and these are fundamentally going to behave in ways associated with "and", "or", "not", but with the ability to natively handle numerical data with fewer graph connections.

In reality, the function of the machine is not fundamentally different, especially on a "logical" level, especially when in the domain of normal computation, there isn't really a limitation against "floating-point calculation" in digital environments anyway, and quantization errors are just as likely from a slightly imprecise neuron.

The fundamental behavior is the same and no, analogies don't "always" break down. Sometimes they are quite solid through and through, especially when they are more "applications of a homomorphism across two domains."
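
To make the switching claim concrete, here is a minimal sketch in Python (the weights, thresholds, and gain are purely illustrative assumptions, not anything from the posts above): a single continuous, sigmoid-style switch reproduces AND, OR, and NOT at its extremes, which is the sense in which "analog" gates still compose Boolean behavior.

```python
import math

def sigmoid(x: float) -> float:
    """A smooth, continuous activation -- an 'analog' switch."""
    return 1.0 / (1.0 + math.exp(-x))

def gate(a: float, b: float, w1: float, w2: float,
         bias: float, gain: float = 10.0) -> float:
    """Weighted sum pushed through a steep sigmoid; steep slope ~ crisp logic."""
    return sigmoid(gain * (w1 * a + w2 * b + bias))

AND = lambda a, b: gate(a, b, 1, 1, -1.5)   # high only when both inputs are high
OR  = lambda a, b: gate(a, b, 1, 1, -0.5)   # high when either input is high
NOT = lambda a: sigmoid(10.0 * (0.5 - a))   # inverts its input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", round(AND(a, b)), "OR:", round(OR(a, b)))
```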
 
analog machines that don't work the same as digital ones
Maybe different in terms of how much information a single interaction can operate on, but not in terms of the basic function of switching actions, nor is it different in terms of gate mechanics nor contingent process.

That the gates are more continuous and have activation profiles and deferred activation makes the ability to access polar-rotational action more directly, but it doesn't fundamentally change that there are exchanges of values happening between switches, and these are fundamentally going to behave in ways associated with "and", "or", "not", but with the ability to natively handle numerical data with fewer graph connections.

In reality, the function of the machine is not fundamentally different, especially on a "logical" level, especially when in the domain of normal computation, there isn't really a limitation against "floating-point calculation" in digital environments anyway, and quantization errors are just as likely from a slightly imprecise neuron.

The fundamental behavior is the same and no, analogies don't "always" break down. Sometimes they are quite solid through and through, especially when they are more "applications of a homomorphism across two domains."

That's all very nice, but off topic. If you want to discuss the topic of free will, determinism, compatibility, incompatibility, etc., then maybe we can discuss those topics in this thread. I'm not interested in discussing logic gates, floating point calculations, or types of computing devices with you.
 
...
That is, in fact, "account[ing] for the nature of the means and mechanisms of how decisions are made."
No it doesn't. The assumption is that it is 'you doing it' - and that is true on the surface.

Yes, it does. Because "on the surface" is where 'we' live. It is all about us, our bodies, and how those bodies interact with reality. Eliminative materialism literally buys you nothing when it comes to explaining the nature of human cognition. It just denies the obvious.

Simply saying 'it is you' doesn't take the nature of the means and mechanisms of your existence and experience into account.

The key point that is either brushed over, ignored, or dismissed in the compatibilist description of free will is that 'you' do not choose your own condition... yet it is your unchosen condition that determines what you are, who you are, how you think, what you think, and what you do in any given instance in the continuum of time and events as the system evolves (compatibilists acknowledge determinism).


Agents make their choices on the basis of what they know and the outcomes that they desire at the time.
The problem is that the 'you that is doing it', namely the brain, has no regulative control over its own condition: its neural architecture and whatever is happening at the cellular and network level, lesions forming, brain trauma, chemical imbalances, etc., which of course may be expressed as a 'you' who has cognitive attributes that are not adaptive, willed, or wanted.

For instance:

On the neurology of morals
Patients with medial prefrontal lesions often display irresponsible behavior, despite being intellectually unimpaired. But similar lesions occurring in early childhood can also prevent the acquisition of factual knowledge about accepted standards of moral behavior.

Consequently:

''An action’s production by a deterministic process, even when the agent satisfies the conditions on moral responsibility specified by compatibilists, presents no less of a challenge to basic-desert responsibility than does deterministic manipulation by other agents.''

I hope you realize at some point that I am not denying the physical nature of brain activity. Of course, you use the term "brain" to refer to brain activity, since brains themselves don't produce any mental activity if they just sit there. Lesions affect brain activity, hence mental activity. Why should a compatibilist have a problem with that? What do you think "compatibilism" refers to? A denial of causality?

I know that you don't deny the physical nature of brain activity. It's the crucial element of our lack of agency in terms of not having access to the means and mechanisms that generate us as conscious beings and our experience of the world.

The brain does not choose its own neural makeup, does not choose how it functions, does not choose its strengths and weaknesses. You may, for instance, be good at maths while someone else sucks at maths, not because either of you willed it, but because of how the cards were dealt.

Which is why ''an action’s production by a deterministic process, even when the agent satisfies the conditions on moral responsibility specified by compatibilists, presents no less of a challenge to basic-desert responsibility than does deterministic manipulation by other agents.''


Cognitive function is an emergent property of physical brain activity, just like other physical systems have emergent properties. If your car gets a flat tire, that affects your ability to drive it. When you teach someone how to drive, you don't need to teach them how to fix a flat tire, although that skill could come in handy. The point is that the transportation function of the system is affected by how well its components interact with each other to produce that function, just as cognitive function depends on how the physical components of the brain interact with each other. Systems have functional properties that depend on their components but are not directly describable or predictable in terms of their components. You can look at trees or forests, but you shouldn't confuse the two. Forests are large ecosystems with special properties, and trees are much smaller ecosystems with different properties.

Of course.


Their choices are causally determined by circumstances. In hindsight, they may judge that an alternative choice would have led to a more desirable consequence, and that's how they learn from experience.

But that's essentially the point: it is the state and condition (not a matter of choice) of the system, the brain, in any given instance that determines the decision and action taken in that instance... and had the system been in the condition it is going to be in a moment's time, we would not have said or done the silly thing we did in the instance of making the blunder, something we may regret for the rest of our lives.

That is decision making, but decision making, for the reasons given above, is not governed or regulated by free will.

That depends entirely on how one chooses to construe the concept of "free will". I favor basing definitions on actual usage, and people don't actually use the expression to describe control over goals and desires. Hard determinists do. Ordinary people use the expression to describe control over actions that lead to satisfaction of goals and desires. Those are givens in the decision-making process. What isn't a given is which action to take in pursuit of a goal or desire. Given that we have competing goals and desires, we need to calculate likely outcomes before acting. From the perspective of a hard determinist (or omniscient deity), the results of the calculation are as inevitable as the solution to a mathematical equation. To an agent embedded inside of the unfolding temporal sequence, the result is unknown until the calculation has actually been made.
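
The "unknown until calculated" point can be made concrete with a toy sketch (the names, options, and weights below are illustrative assumptions, nobody's actual model): the function is fully deterministic, so its output is fixed by its inputs, yet the agent running it learns the answer only by actually performing the calculation.

```python
# A deterministic "decision": given the same knowledge and the same desires,
# the result is as fixed as the solution to an equation -- but it is unknown
# to the embedded agent until the calculation has actually been run.

def choose(options, desires):
    """Score each option against competing desires; pick the best."""
    def score(option):
        return sum(weight * option.get(feature, 0.0)
                   for feature, weight in desires.items())
    return max(options, key=score)

options = [
    {"name": "walk",  "speed": 0.2, "cost": 0.0, "comfort": 0.9},
    {"name": "drive", "speed": 0.9, "cost": 0.6, "comfort": 0.7},
]
desires = {"speed": 1.0, "cost": -0.8, "comfort": 0.5}

print(choose(options, desires)["name"])  # fixed by the inputs, yet unknown until computed
```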

Having control over our actions is not exempt from the deterministic processes of the underlying mechanisms. The means of control is related to the behaviour that is being controlled, sometimes successfully, sometimes not: addictions, habits, rituals, etc, where there may be no controlling the undesirable addiction or habit without external help, which depending on the problem, may or may not help.


Since they are responsible for the choices they make, their behavior may differ when making similar choices in the future.

Responsibility in the sense that it is ultimately you as a brain that is generating mind and responding to events around you that makes decisions and acts upon them, but the issue of free will lies with the nature of agency, and that agency is not a matter of free will because ultimately you don't get to choose your own condition.

But you do in the course of time, which is what you experience. That's the point. The issue of free will and the nature of agency only make sense from the perspective of someone faced with an uncertain future. The future is inevitable and certain in the imagination of a hard determinist, so free will is neither necessary nor sensible from their perspective. Nothing is uncertain, just a clockwork cascade of causal effects. It is in the nature of agency that free will does exist, because the future is always uncertain to agents. If you want to understand what is 'free' about free will, then you have to understand the nature of agency.

What you don't choose, and can't choose, is your genetic makeup and neural architecture, which are the very means of how you think and how you respond to events. Take a group of people and ask them what they would do in a given situation to see how different personalities and characters come into play.





''The increments of a normal brain state is not as obvious as direct coercion, a microchip, or a tumor, but the “obviousness” is irrelevant here. Brain states incrementally get to the state they are in one moment at a time. In each moment of that process the brain is in one state, and the specific environment and biological conditions leads to the very next state. Depending on that state, this will cause you to behave in a specific way within an environment (decide in a specific way), in which all of those things that are outside of a person constantly bombard your senses changing your very brain state. The internal dialogue in your mind you have no real control over.''

Right. Nobody disputes that, but you have misconstrued the concept of free will.

I take the definition that is given by compatibilists: to act according to one's will without being forced, coerced, or unduly influenced.


It is not about having momentary control over one's desires, only one's actions. Free will is about achieving fulfillment of the desires that one has and perhaps regulating the desires that one has in the future. Our hardwired need to learn from experience is what determines the purpose and function of free will.

Our actions are determined prior to conscious awareness, as is the will to act. Our will is not freely chosen, it is fixed by an interaction of inputs and memory function prior to conscious representation in the form of thought and action.

We have the ability to make decisions and act upon them, but this ability has nothing to do with free will because - quite simply - 'action production by deterministic processes' is not a matter of free will.

And that is the point where compatibilism fails. It fails because it focuses on external force, coercion, and undue influence while downplaying or ignoring the very nature of the underlying means of thought, decision making, and action.
 
analog machines that don't work the same as digital ones
Maybe different in terms of how much information a single interaction can operate on, but not in terms of the basic function of switching actions, nor is it different in terms of gate mechanics nor contingent process.

That the gates are more continuous and have activation profiles and deferred activation makes the ability to access polar-rotational action more directly, but it doesn't fundamentally change that there are exchanges of values happening between switches, and these are fundamentally going to behave in ways associated with "and", "or", "not", but with the ability to natively handle numerical data with fewer graph connections.

In reality, the function of the machine is not fundamentally different, especially on a "logical" level, especially when in the domain of normal computation, there isn't really a limitation against "floating-point calculation" in digital environments anyway, and quantization errors are just as likely from a slightly imprecise neuron.

The fundamental behavior is the same and no, analogies don't "always" break down. Sometimes they are quite solid through and through, especially when they are more "applications of a homomorphism across two domains."

That's all very nice, but off topic. If you want to discuss the topic of free will, determinism, compatibility, incompatibility, etc., then maybe we can discuss those topics in this thread. I'm not interested in discussing logic gates, floating point calculations, or types of computing devices with you.
Well, they're the same discussion. I'm not sure how you fail to understand that, but you do somehow fail to understand that the topic of free will is intrinsically and inextricably linked to the processes of computation through gated structures, the contingent mechanisms of the brain, and the question of how these come to encode what we will do.

Both you and DBT seem to be unable to look at the underlying mechanism, DBT being unable to look at and see that contingent mechanisms exist and can exercise control over themselves through delayed but recursive action, and you by being unable to look at the important part in the metaphor involving recursive action which is the same in both considerations.

In short, there's no difference, no significant boundary, between a truth table whose output states rearrange said truth table and the mechanism of control necessary to "decide for ourselves what we will do from among whatever freedoms we can identify of our delta-V".

It does not matter that the truth table in question is dominated by "analog" switches or "digital" switches; both compose structures of truth and state transition.
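
As a minimal sketch of that idea (my own construction, not code from the thread): a tiny state machine whose transition table contains an output that rewrites the table itself, so the machine's "decisions" reconfigure the machine.

```python
# A "truth table whose output states rearrange said truth table":
# a state machine that edits its own transition table as it runs.

table = {  # (state, input) -> (next_state, rewrite?)
    ("A", 0): ("A", False),
    ("A", 1): ("B", False),
    ("B", 0): ("A", False),
    ("B", 1): ("B", True),   # this output triggers self-modification
}

state = "A"
for bit in [1, 1, 0, 1]:
    state, rewrite = table[(state, bit)]
    if rewrite:
        # From now on, input 0 in state B loops in B instead of returning to A.
        table[("B", 0)] = ("B", False)
    print(bit, "->", state)
```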
 
Jarhyn, you are right that I fail to understand how your extremely radical reductionist approach helps us explain the concept of free will, and I have no confidence that you understand it either. Recursion is an important computational tool, but you have to know a lot more about the cognitive activity you are trying to explain with that tool before it becomes useful in the explanation. I have worked in the field of AI for a long time and spent a lot of time writing programs employing recursion to describe some aspects of intelligent behavior, so I am aware of how difficult that can be. Trying to actually write a program that mimics complex intelligent behavior is hard, but the process does tend to make one a little more humble about how easy it must seem to those who lack the experience of having tried. One learns a lot about the extent of our ignorance regarding high-level human cognition, because there are so many different factors to take into account. As one of my grad school professors once put it, the process is a bit like trying to weigh a group of frogs that won't sit still in the weighing pan. Your expertise is at the level of the tree, and you are trying to explain the forest. Nobody disputes that it is made up of trees, but there is a lot of other stuff in there, too. Reductionism doesn't help if you can't describe what it is that you are trying to reduce to lower-level components.
 
reductionist
you have to know a lot more about the cognitive activity you are trying to explain with that tool before it becomes useful in the explanation
I would say that you try to make more of things than they are.

That I find recursion to be the root of systemic self awareness is not reductionist, as far as I can see.

It is literally the general description of a system having an awareness of itself.

If you want to look at the more complicated aspect of being aware of oneself in terms of the noun/verb pair "I AM", you need to look at systems with polymorphic language models, systems which can take in and even generate their own tokens, and whose systems form some association from whichever tokens they consume.

But... That isn't important here.

What is important here is the ill-informed statement that has been made: that "systems cannot have regulatory control over themselves".

That regulatory control comes directly from recursive process, and recursive process naturally involves a delay of frame. Indeed to discuss this at all requires some aspect of reduction to the level of "systems".

DBT seems to think that the delay of frame somehow invalidates both the existence of an entity making decisions prior to the recursive report, AND the ability of the system, through self-observation, to self-regulate (to use those reports to make decisions to issue commands to change the system's own state, regulating it).

The fact that these things are trivially observable in software means they are available modes of action in biological switched systems.

Think of the exercise not as reductionist but similar to the way something being true of some thing in math can often indicate indirectly that something is true in a very different mathematical context: by showing the reality of recurrent regulatory control, we can disprove the statement that such regulatory control is unavailable.

The recurrence is much more complicated in humans, the loop much more difficult to see at times depending on the loop, but it is limited in the same way: a signal must come back around from the output of the system, and this will take time before it is acknowledged. The system could be even more robust than I describe; humans often hesitate at the moment of decision "so to be sure of their decision", and this means that the frames of recursive report about the decision will be considered before the decision is allowed to be carried through by the processes of the mind.
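
A minimal sketch of that loop (the setpoint, gain, and frame count are illustrative assumptions): the system only ever sees a report of its own output one frame late, and it uses that stale report to steer its next state, i.e., delayed but recursive self-regulation.

```python
# Regulatory control through recursion with a "delay of frame":
# the report of the system's own output arrives one frame late,
# and the system steers itself with that delayed report.

setpoint, gain = 10.0, 0.5
state = 0.0
report = None  # no recursive report is available on the first frame

for frame in range(8):
    if report is not None:
        state += gain * (setpoint - report)  # self-regulate on the stale report
    report = state  # this frame's output becomes next frame's report
    print(f"frame {frame}: state={state:.2f}")
```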

Sometimes this involves targeting or inventing systemic goals.

Do you have any memories of times when you decided to create an arbitrary goal of some kind, perhaps for the sole sake of creating an arbitrary goal for yourself?
 
Thanks for the discussion, Jarhyn. I have said what I wanted to say in response to you. I think all of us agree here that physical brain activity is the essential foundation of mental activity, so I'm going to return to the topic of free will and compatibilism, if there are more posts on that subject.
 

Our actions are determined prior to conscious awareness …

You keep saying this. Prove it.

Just consider the research and experiments that have been done on the timing between sensory input and conscious response (Libet, Haynes, et al.):

Abstract
''Is it possible to predict the freely chosen content of voluntary imagery from prior neural signals? Here we show that the content and strength of future voluntary imagery can be decoded from activity patterns in visual and frontal areas well before participants engage in voluntary imagery. Participants freely chose which of two images to imagine. Using functional magnetic resonance (fMRI) and multi-voxel pattern analysis, we decoded imagery content as far as 11 seconds before the voluntary decision, in visual, frontal and subcortical areas. Decoding in visual areas in addition to perception-imagery generalization suggested that predictive patterns correspond to visual representations. Importantly, activity patterns in the primary visual cortex (V1) from before the decision, predicted future imagery vividness. Our results suggest that the contents and strength of mental imagery are influenced by sensory-like neural representations that emerge spontaneously before volition.''

It must be that way because information acquired by the senses cannot be conscious in its initial stages: it is converted to nerve impulses, transmitted to the related regions (visual, auditory, etc.), processed, and integrated with memory to enable recognition, comprehension, and understanding milliseconds prior to conscious representation of that information. What you see, hear, smell, think, feel, and do is always after the fact: after light waves stimulate the rods and cones, are converted to nerve impulses, and propagate to the related regions of the brain.



And, even if true sometimes (and it is true sometimes, but not all times, as you would have it in the face of all contradictory evidence, including Libet), that in no way conflicts with compatibilist free will.

It should always hold true.

You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

Processing must necessarily precede consciousness.

How can it possibly work any other way?

"A lot of the early work in this field was on conscious decision making, but most of the decisions you make aren't based on conscious reasoning," says Pouget. "You don't consciously decide to stop at a red light or steer around an obstacle in the road. Once we started looking at the decisions our brains make without our knowledge, we found that they almost always reach the right decision, given the information they had to work with."

''Subjects in this test performed exactly as if their brains were subconsciously gathering information before reaching a confidence threshold, which was then reported to the conscious mind as a definite, sure answer. The subjects, however, were never aware of the complex computations going on, instead they simply "realized" suddenly that the dots were moving in one direction or another. The characteristics of the underlying computation fit with Pouget's extensive earlier work that suggested the human brain is wired naturally to perform calculations of this kind.''
 
You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

In a way, though, you can "see the object" before nerve impulses transmit information to the brain. Perception is active not passive, since the neural information has to be interpreted or integrated with a mental model of reality. Hence, we often see things that aren't real--like a piece of rope being mistaken for a snake. Illusionists often depend on active perception to perform magic tricks--to make people see phenomena that aren't there. Baseball players see a ball moving through the air, but they have to rush to a location where they expect it to land so that they can catch it. Expectations become a huge factor in how we perceive reality.
 
You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

In a way, though, you can "see the object" before nerve impulses transmit information to the brain. Perception is active not passive, since the neural information has to be interpreted or integrated with a mental model of reality. Hence, we often see things that aren't real--like a piece of rope being mistaken for a snake. Illusionists often depend on active perception to perform magic tricks--to make people see phenomena that aren't there. Baseball players see a ball moving through the air, but they have to rush to a location where they expect it to land so that they can catch it. Expectations become a huge factor in how we perceive reality.
And this is why the low level discussion on the computational nature of awareness was important: we have DBT in here yet again pretending that they solved the hard problem from an armchair with the poorly understood claims of others (re: consistently misinterpreting Libet).

Awareness builds, it doesn't just suddenly happen somewhere. It's self-reference, recursion that "suddenly happens", and that recursive part isn't even directly necessary to the existence of wills and freedoms. As it is, many systems can accomplish it without recursion at all, by massively predicting how things worked elsewhere in a strongly correlated way (see also, how to "flatten" a finite recursive system; it costs a lot more in terms of model size, but it produces the same result).
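
For what "flattening" means here, a minimal sketch (the update rule is an arbitrary illustrative choice): the same step function applied through a feedback loop versus unrolled into a fixed feedforward chain. The flattened version costs more structure, but it computes the same result.

```python
# "Flattening" a finite recursive system: recursion through a feedback
# loop versus the same computation unrolled into a feedforward chain.

def step(x):                 # one cycle of the recurrent system
    return 0.5 * x + 1.0

def recurrent(x, n=3):       # loop: feed the output back n times
    for _ in range(n):
        x = step(x)
    return x

def flattened(x):            # unrolled: three explicit copies wired in sequence
    return step(step(step(x)))

assert recurrent(4.0) == flattened(4.0)  # bigger "model", same result
print(flattened(4.0))
```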

It isn't "made conscious" magically at some point; the neurons of the eyes are conscious of the signal from the cones and rods; the neurons of the optic nerve are conscious of emerging patterns among those signals, and so on, until that is constructed into awareness of objects.

Like, how can he even claim that it is "made conscious" somehow without knowing what he even means when he says those words?
 
Jumping into the thread.

Nothing changes in zero time, and information cannot propagate faster than the speed of light. Effect cannot precede cause.

Perception and awareness are catch-all, simplistic terms that describe a complex physical neural process.

How does a baseball player 'compute' how to throw a ball from 3rd to 1st base? Or a bird landing on a branch blowing in the wind? How do you know how to toss something into a waste paper basket from a distance?

I don't think compute is the best term to describe it. It is learned behavior, our neural net that has evolved.

I have heard it said that in AI, for a given input like a question, it is not possible to localize where in the neural net the answer is formed. The AI learns from inputs, like reading texts.

I don't think it can be described philosophically.
 
I don't think compute is the best term to describe it.
Well, what you're looking at neurons doing is a continuous version of what simple binary switches do. "Compute" is the term.

It does this computation more in the way an op-amp computes an output voltage than the way a binary switch computes an AND, as an analog process.

I think one of the biggest challenges I myself have to this day is thinking about switches that aren't binary. But Shor's algorithm still "calculates" and "computes" in a continuous way, and neurons do the same.

The behavior neurons evolved to is switching against one another. Brains, larger constructions of neurons, evolved to produce a hardware state diagram in an "analog" way, but they no less compute than any other hardware does.

What "learning" means here is the reorganization of that hardware either in a purely physical manner (model), or in a logical manner dictated by some standing wave-like behavior within the system (context).

It absolutely can be described philosophically, but the exercise for doing so is pointless, long, and unnecessary for the vast majority of applications in the same way as coding in native machine code is... It's useful for preventing certain bugs and making a compiler and assembler, maybe for hacking and exploiting certain bugs, but not much more. It would essentially involve a non-binary implementation of Verilog (though Verilog supports functional blocks that handle floating point anyway).

The difference I see is that the result can be reconfigured automatically in response to its runtime failure modes.
 
You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

In a way, though, you can "see the object" before nerve impulses transmit information to the brain. Perception is active not passive, since the neural information has to be interpreted or integrated with a mental model of reality. Hence, we often see things that aren't real--like a piece of rope being mistaken for a snake. Illusionists often depend on active perception to perform magic tricks--to make people see phenomena that aren't there. Baseball players see a ball moving through the air, but they have to rush to a location where they expect it to land so that they can catch it. Expectations become a huge factor in how we perceive reality.
And this is why the low level discussion on the computational nature of awareness was important: we have DBT in here yet again pretending that they solved the hard problem from an armchair with the poorly understood claims of others (re: consistently misinterpreting Libet).

Awareness builds, it doesn't just suddenly happen somewhere. It's self-reference, recursion that "suddenly happens", and that recursive part isn't even directly necessary to the existence of wills and freedoms. As it is, many systems can accomplish it without recursion at all, by massively predicting how things worked elsewhere in a strongly correlated way (see also, how to "flatten" a finite recursive system; it costs a lot more in terms of model size, but it produces the same result).

It isn't "made conscious" magically at some point; the neurons of the eyes are conscious of the signal from the cones and rods; the neurons of the optic nerve are conscious of emerging patterns among those signals, and so on, until that is constructed into awareness of objects.

Like, how can he even claim that it is "made conscious" somehow without knowing what he even means when he says those words?

I wish I could say that I got something substantive about your hand waving at neurons and recursion, but I don't see how they relate to the topic of agency or free will. I'm sure that you, like anyone attempting to simulate intelligent behavior, would find recursion a useful programming structure, especially if you were trying to mimic the behavior of neurons. But we are still interested in the nature of free will and how it operates in a chaotically deterministic reality.
 
You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

In a way, though, you can "see the object" before nerve impulses transmit information to the brain. Perception is active not passive, since the neural information has to be interpreted or integrated with a mental model of reality. Hence, we often see things that aren't real--like a piece of rope being mistaken for a snake. Illusionists often depend on active perception to perform magic tricks--to make people see phenomena that aren't there. Baseball players see a ball moving through the air, but they have to rush to a location where they expect it to land so that they can catch it. Expectations become a huge factor in how we perceive reality.
And this is why the low level discussion on the computational nature of awareness was important: we have DBT in here yet again pretending that they solved the hard problem from an armchair with the poorly understood claims of others (re: consistently misinterpreting Libet).

Awareness builds, it doesn't just suddenly happen somewhere. It's self-reference, recursion that "suddenly happens", and that recursive part isn't even directly necessary to the existence of wills and freedoms. As it is, many systems can accomplish it without recursion at all, by massively predicting how things worked elsewhere in a strongly correlated way (see also, how to "flatten" a finite recursive system; it costs a lot more in terms of model size, but it produces the same result).

It isn't "made conscious" magically at some point; the neurons of the eyes are conscious of the signal from the cones and rods; the neurons of the optic nerve are conscious of emerging patterns among those signals, and so on, until that is constructed into awareness of objects.

Like, how can he even claim that it is "made conscious" somehow without knowing what he even means when he says those words?

I wish I could say that I got something substantive about your hand waving at neurons and recursion, but I don't see how they relate to the topic of agency or free will. I'm sure that you, like anyone attempting to simulate intelligent behavior, would find recursion a useful programming structure, especially if you were trying to mimic the behavior of neurons. But we are still interested in the nature of free will and how it operates in a chaotically deterministic reality.
And to that I say "see also the discussion on how all 'algorithms' are 'wills', all available executable paths of such algorithms are their freedoms, and free will is the lack of environmental precursors that prevent the travel of the execution paths to their goal states"

I'm not talking specifically about programmatic recursion so much as state tree or graph recursion, of which the simplest form is the "memory cell", a pair of NOT gates recursing on its own output state. See also RNNs and their equivalent, much larger "flattened" version that does the same thing through massive re-work and inefficiency.
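
As a concrete sketch of that memory cell (the textbook version cross-couples two NOR gates rather than bare inverters, but the output-fed-back-as-input principle is the same):

```python
# A memory cell as cross-coupled gates recursing on their own outputs:
# the classic SR latch built from two NOR gates.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, q_bar):
    """Settle the cross-coupled pair by iterating until the state is stable."""
    for _ in range(4):  # a few passes suffice for this tiny circuit
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, q_bar = 0, 1
q, q_bar = sr_latch(1, 0, q, q_bar)  # set
print("after set:", q, q_bar)        # 1 0
q, q_bar = sr_latch(0, 0, q, q_bar)  # hold: remembers via its own feedback
print("holding:  ", q, q_bar)        # still 1 0
q, q_bar = sr_latch(0, 1, q, q_bar)  # reset
print("after rst:", q, q_bar)        # 0 1
```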

I talk about these because yet again, DBT is indirectly denying that state machines can control their own states through cyclic (or emulated cyclic) action.
 
You cannot see an object before the eyes acquire the information from the external world, you cannot see the object before nerve impulses transmit the information to the brain and that information is processed and made conscious.

In a way, though, you can "see the object" before nerve impulses transmit information to the brain. Perception is active not passive, since the neural information has to be interpreted or integrated with a mental model of reality.

We cannot be conscious of light waves, pressure waves, airborne molecules, etc., before their information content is transmitted to the brain: not as light or air pressure or molecules, but as nerve impulses interpreted as objects, sounds, and smells. They are not sights, sounds, or smells prior to processing, which is why there must be milliseconds of lag between inputs and conscious experience of that information... and that is what the research finds.

Hence, we often see things that aren't real--like a piece of rope being mistaken for a snake. Illusionists often depend on active perception to perform magic tricks--to make people see phenomena that aren't there. Baseball players see a ball moving through the air, but they have to rush to a location where they expect it to land so that they can catch it. Expectations become a huge factor in how we perceive reality.

Errors such as mistaking a rope for a snake, or some movement in the shadows for a person, don't happen on a different time scale; the inputs are the same, but the brain simply misinterprets the information, which is corrected when more information is acquired: 'oh, it's just a piece of rope.'


''Memory is perhaps the most extraordinary phenomenon in the natural world. Every person’s brain holds literally millions of bits of information in long-term storage. This vast memory store includes our extensive vocabulary and knowledge of language; the tremendous and unique variety of facts we’ve amassed; all the skills we’ve learned, from walking and talking to musical and athletic performance; many of the emotions we feel; and the continuous sensations, feelings, and understandings of the world we term “consciousness.” And we routinely access this tremendous volume of data in the blink of an eye. Without memory there can be no mind as we understand it.''
 
there must be milliseconds of lag between inputs and conscious experience of that information
@Copernicus

And this is yet again why I bring up recursion, and why consciousness is different from it: DBT is pinning the awareness on the recursion I've been talking about here. He is actively defining "conscious" as the recursion rather than the actual process by which awareness functions, making a big deal of the loop (which as you note, Copernicus, is NOT necessary), insofar as he wants to say the recursive report is what makes up the core of self, when as others and I have pointed out, it's the processing part that is the "awareness".

ONLY by learning the specifics of how syntactic structures and outputs are arranged in large scale systems can you start to understand why DBT is wrong; only this way can you see why it is the analysis, not the re-evaluation of previous cycles, that creates the wills. As we can see there is no "unflattened" recursion happening in a modern AI large language model. They are all forcibly flattened, and yet they still have "awareness of their environment" and still "generate and hold wills which can be differentiated as to source".

This is perhaps adjacent to the slightly mis-informed version of panpsychism wherein people say that the information you receive is the consciousness, and the brain is just a receiver for it, though I would say it's more that it is the contingent mechanisms that turn simple awareness of sensory switches into more complicated statements derivable from that signal through some manner of computation regardless of whether it is internal or external sensation.

As Pood and I keep saying, I am the thing making the decision to be reported to myself some time in the future; the recursion allows me to recognize my own existence for what it is, it improves and increases my awareness, but it does not originate that which I am aware of, only my "awareness of awareness".
 
In computer science the term is finite automata.

A finite automaton (FA) is a simple idealized machine used to recognize patterns within input taken from some character set (or alphabet) C. The job of an FA is to accept or reject an input depending on whether the pattern defined by the FA occurs in the input. A finite automaton consists of: a finite set S of N states.
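
A minimal runnable version of that definition (the alphabet, states, and pattern are illustrative choices): a finite automaton over {'a', 'b'} that accepts any input containing the substring "ab".

```python
# A finite automaton: a finite set of states, a transition function over
# an alphabet, a start state, and accepting states.

TRANSITIONS = {
    ("start", "a"): "saw_a",
    ("start", "b"): "start",
    ("saw_a", "a"): "saw_a",
    ("saw_a", "b"): "accept",
    ("accept", "a"): "accept",
    ("accept", "b"): "accept",
}
ACCEPTING = {"accept"}

def fa_accepts(text: str) -> bool:
    state = "start"
    for ch in text:
        state = TRANSITIONS[(state, ch)]
    return state in ACCEPTING

print(fa_accepts("bbab"))  # True: contains "ab"
print(fa_accepts("bbba"))  # False
```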


Scroll down to the state diagrams.



And Mealy vs Moore machines



And Fuzzy Logic. It is used in applications.

Fuzzy logic is an approach to variable processing that allows for multiple possible truth values to be processed through the same variable. Fuzzy logic attempts to solve problems with an open, imprecise spectrum of data and heuristics that makes it possible to obtain an array of accurate conclusions.
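
A minimal sketch of those fuzzy operators (the membership function for "warm" is an illustrative assumption): truth values live on [0, 1] and combine with the standard min/max/complement operators, so a proposition and its negation can both hold to a degree.

```python
# Fuzzy logic basics: degrees of truth on [0, 1] with min/max/complement.

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

def warm(temp_c):
    """Degree of membership in 'warm', ramping from 10 C to 25 C."""
    return min(1.0, max(0.0, (temp_c - 10.0) / 15.0))

t = 18.0
print(f"warm({t}) = {warm(t):.2f}")  # partially warm
print("warm AND not-warm:", fuzzy_and(warm(t), fuzzy_not(warm(t))))  # > 0, unlike crisp logic
```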


In CS there are classes of problems that cannot be solved by logic trees and finite state machines.

The universal machine is a Turing Machine.
 
In computer science the term is finite automata.

A finite automaton (FA) is a simple idealized machine used to recognize patterns within input taken from some character set (or alphabet) C. The job of an FA is to accept or reject an input depending on whether the pattern defined by the FA occurs in the input. A finite automaton consists of: a finite set S of N states.


Scroll down to the state diagrams.



And Mealy vs Moore machines



And Fuzzy Logic. It is used in applications.

Fuzzy logic is an approach to variable processing that allows for multiple possible truth values to be processed through the same variable. Fuzzy logic attempts to solve problems with an open, imprecise spectrum of data and heuristics that makes it possible to obtain an array of accurate conclusions.


In CS there are classes of problems that cannot be solved by logic trees and finite state machines.

The universal machine is a Turing Machine.
Well, they can't be solved by binary state machines.

Either way, it's interesting that modern LLMs have internal Turing machine emulations and emulate a Turing machine on "messy" natural language input, such that they can execute arbitrary and generalized instructions, and generate the same.

To me, the things in any large-scale system that could be considered "the magic parts" are the parts that encompass the large-scale polymorphic language model and its training cycle. As long as the result can generate algorithmic descriptions and push metaphorical (literal?) microcode updates to ancillary state machines (I'm not sure "finite" applies here, seeing as neurons can do arbitrary-precision floating-point calculation, as can a Verilog node), the system is complete enough to me to be said to contain a "philosophically meaningful agent".

It's fundamentally the process of generating and executing algorithms (Turing systems, IOW) that matters most to me in this discussion, which is the task of the generalized large language model within any given system, because this is the thing that takes goal vectors and state vectors and other inputs and hammers those into response tokens: it is the part capable of acknowledging the "whole self".
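
Purely as a toy illustration of that last claim (the vectors and vocabulary below are my own invented stand-ins, not how any real LLM works): a "policy" that scores candidate response tokens against the combination of a goal vector and a state vector and emits the best-aligned one.

```python
# Toy sketch: hammering a goal vector and a state vector into a response token.

GOAL  = [0.2, 0.8]          # e.g. "be helpful" (invented stand-in)
STATE = [0.1, 0.9]          # e.g. "user asked a question" (invented stand-in)
TOKENS = {"hello": [0.9, 0.1], "answer": [0.3, 0.9], "ignore": [-1.0, -1.0]}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

context = [g + s for g, s in zip(GOAL, STATE)]
best = max(TOKENS, key=lambda tok: dot(TOKENS[tok], context))
print(best)  # "answer": the token best aligned with goal + state
```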
 