
There isn't really a 'free will problem'.

No. What I meant is that there is nothing special that occurs when we press the left button instead of the right. There is no specific shift in how the mind works. There is no point of decision.

So you say. Yet QM says differently.
The brain is not a quantum computer.

Maybe not, but all the brain would need are quantum processes, a very realistic possibility.
 
The brain is not a quantum computer.

Maybe not, but all the brain would need are quantum processes, a very realistic possibility.

How would that help build a case for free will? Can you explain?

First of all, the state of QM is indeterministic. This is necessary for the free will I argue for. And since there is an observer that reports and experiences determining/making the choice, it would seem very reasonable to believe that it is the observer that actually makes the selection.
 
I think his point, which I wholeheartedly disagree with, is that all of this forethought and modification is entirely determined - it appears free, but that's just an illusion. There's a nice passage in a Tom Robbins novel that sums it up:

Tom said:
For Christmas that year, Julian gave Sissy a miniature Tyrolean
village. The craftsmanship was remarkable.

There was a tiny cathedral whose stained-glass windows made fruit salad
of sunlight. There was a plaza and ein Biergarten. The Biergarten got
quite noisy on Saturday nights. There was a bakery that smelled always
of hot bread and strudel. There was a town hall and a police station,
with cutaway sections that revealed standard amounts of red tape and
corruption.

There were little Tyroleans in leather britches, intricately stitched,
and, beneath the britches, genitalia of equally fine workmanship. There
were ski shops and many other interesting things, including an
orphanage.

The orphanage was designed to catch fire and burn down every Christmas
Eve. Orphans would dash into the snow with their nightgowns blazing.
Terrible.

Around the second week of January, a fire inspector would come and poke
through the ruins, muttering, "If they had only listened to me, those
children would be alive today."

See his point?

I suppose I missed the part where bilby was arguing for the existence of god?
 
Can someone explain manners and a lack of free will? You have young children and older children. Young children are dumb when it comes to social etiquette. Why is it, when taught about it, that they change their behavior? How is a lack of free will not interfering with such a substantial change in behavior?

Take this chart, which shows proper social behavior for a child and how it improves. I'm supposed to believe there is no free will when children, who are taught to behave, behave and act more socially appropriately? That the bump in the graph is meaningless, in the context of being taught to act in certain manners?

View attachment 13955

Agreed. I always have difficulty putting it into words, but as far as I'm concerned, learning can't occur without free will. Learning, adaptation to stimuli, requires some element of both forethought and choice. If we don't have free will, we can't learn, we can't change.

Robotics and AI work might introduce some hesitancy here... but it doesn't seem so to me. I think free will would necessarily be correlated with intelligence and sentience (maybe I mean sapience?). An insect has less free will than a human. A dog has more free will than a snake. As AI continues to develop, I expect that at some point there will be a line that gets crossed, and the artificial entity gains some element of free will.

Lots of speculation, but it all ties back to whether or not choices are really choices, or whether it's all a cosmic hoax.

- - - Updated - - -

Can someone explain manners and a lack of free will? You have young children and older children. Young children are dumb when it comes to social etiquette. Why is it, when taught about it, that they change their behavior? How is a lack of free will not interfering with such a substantial change in behavior?

You can train a dog too. :)

Dogs also have some degree of free will. Perhaps not as much as humans, but certainly not zero.
 
So far as I can tell... those idiots are the people who argue for the side of determinism. I've always thought it was a stupid interpretation of what free will means, but I've lost track of the number of times that (or something substantially similar) has been tossed out as an argument for why we don't have free will.

Alright, fair enough, but the way you phrased the bit I responded to suggested, and in fact still suggests to me, that you essentially agreed with this view of free will.
EB

Hmm. I can see how you might get that from that sentence in isolation... but I thought the remainder of my post did a pretty good job of clarifying my position. How could I have explained the remainder differently to prevent this miscommunication?
 
It's not a question of whether the consequences are pleasant or not. It's a question of whether your hypothesis is supported by observation, and of the degree of complexly nested Rube Goldberg workarounds required in order to make determinism true.

A "decision point" is a point at which there is more than one possible outcome - under the MWI, it is a point at which universes divide.

I have no use for the concept of "decision", and didn't use it.
Did you not choose to write that post? Did it happen without your consent, with you helpless to stop it from occurring?

- - - Updated - - -

Or at least it was until neurobiologists realised that the brain is stochastic, not deterministic...

Oh, I am fully aware that it would have been impossible to predict the details of this inevitable event; and that it is equally only possible to influence the probability that people would agree with me - the audience is far too complex for certainty. That's not a reason to go positing weird hypotheses such as 'choice', 'will' or even 'free will', though.

"Could not have done otherwise" is completely at odd (pun intended) with a stochastic brain, let alone a stochastic universe.
 
Can someone explain manners and a lack of free will? You have young children and older children. Young children are dumb when it comes to social etiquette. Why is it, when taught about it, that they change their behavior? How is a lack of free will not interfering with such a substantial change in behavior?

Take this chart, which shows proper social behavior for a child and how it improves. I'm supposed to believe there is no free will when children, who are taught to behave, behave and act more socially appropriately? That the bump in the graph is meaningless, in the context of being taught to act in certain manners?

View attachment 13955
You can train them because training adds deterministic inputs which change their behavior by changing their programming.

You have to train them because, according to determinists, you don't have a choice.

Let's take this as given. By what mechanism are you choosing to attempt to change their behavior? By what process are you deciding that a different behavior is more acceptable? By what method are you even selecting what constitutes acceptable and unacceptable?

- - - Updated - - -

Stochastic processes are not inevitable until they have concluded. I'm happy to dump will and free will, but you are going to struggle to sell rejecting ideas like choice, especially in a system that is in the business of predicting and explaining itself.

However, I'm keen to see the argument.

I am sorry to disappoint, but I think we are in agreement and just talking past each other.

In my thought experiment in which we accept, ad argumentum, the Many Worlds Interpretation, reality is a static block multiverse. No choice is possible, insofar as all possibilities actually occur. 'Choice' is real only from the POV of a single thread of this multiverse - and so from the POV of any intelligence inhabiting it, the universe is Stochastic. A Tropical Cyclone could make landfall anywhere on a thousand km or more stretch of coast; Under MWI, it actually does make landfall at all of the possible locations, but in different universes. From the POV of a single thread, the Cyclone 'chooses' a place to make landfall. But that 'choice' is a result of the action of the components of the Cyclone upon each other, and of the rest of the environment on those components, all interacting in ways that defy prediction - but which can be described probabilistically. No 'thought' or 'intelligence' on the part of the Cyclone is assumed.

Complex systems constantly make 'choices' (as viewed from a single 'strand' of the multiverse), but not in the sense of the word that advocates of 'free will' use it. Whether or not the other 'strands' exist is a whole other question, and I suspect is also unimportant - the difference between multiple universes which cannot communicate with one another after being spawned at a decision point, and a single universe where the other possibilities do not get realized once a decision point is passed, is nil, from our perspective in any given strand.

Again... how is this a more parsimonious explanation than that choice actually exists?

- - - Updated - - -

You have to train them because, according to determinists, you don't have a choice.
Don't have a choice or lean towards the path of least resistance?

Wiploc make joke.

I'm not a determinist myself, but a determinist would say that if you did something, then you had to do it.

I did not get that joke. Perhaps some [Devil's Advocate][/Devil's Advocate] tags would be helpful?
 
I'm pretty sure the humans, as articulating agents, could make similar arguments for frogs, leopards, whales, catfish, ... and bacteria. They don't because doing so would defeat their claims of uniqueness.

I for one am not claiming uniqueness. My view of what constitutes bounded free will exists for any entity that has cognition - including dogs and cats, horses and frogs, and even robots. But the degree of freedom and will varies in relation to intelligence and sapience. Dogs have agency... not nearly as much agency as a human, but more than a spider. Some animals probably have near zero agency. Rocks have no agency at all. And some animals probably have more agency than we give them credit for ;).
 
Dogs also have some degree of free will. Perhaps not as much as humans, but certainly not zero.

That's ok, but it depends what you mean by free will.

I am happy with 'degrees of freedom' and 'choices' (many machines and systems have and make them, respectively) and 'agency' (a thermostat has a little). I am also not ultimately averse to calling something 'free will', so long as my listener knows what version of it I am talking about: that we are meat robots who operate under either a fully determined universe or one which has randomness chucked into the mix, and are not 'really' free at all. In fact, we live our lives mainly under constraints (something which, imo, is often under-emphasised in the hunt for the degrees-of-freedom thing).

My main reservation is that if we say to someone that they do have free will, it might be fudged to imply that we have the illusory type of free will that we probably don't. It's a loaded term, in other words. Loaded with retributive urges, for one thing.
 
How would that help build a case for free will? Can you explain?

First of all, the state of QM is indeterministic. This is necessary for the free will I argue for. And since there is an observer that reports and experiences determining/making the choice, it would seem very reasonable to believe that it is the observer that actually makes the selection.
We have been through this so many times before... it is complete baloney.
Indeterminism doesn't give you free will, only indeterminism. You can't steer it.
Your hogwash about an observer has been thoroughly dealt with by bilby.
Last: even if "decisions" were performed in quantum processes, those decisions are such tiny parts of the brain's functions that there would be no resemblance to the high-level decisions we perceive ourselves making.
 
How would that help build a case for free will? Can you explain?

First of all, the state of QM is indeterministic. This is necessary for the free will I argue for. And since there is an observer that reports and experiences determining/making the choice, it would seem very reasonable to believe that it is the observer that actually makes the selection.
We have been through this so many times before... it is complete baloney.
Indeterminism doesn't give you free will, only indeterminism. You can't steer it.
Your hogwash about an observer has been thoroughly dealt with by bilby.
Last: even if "decisions" were performed in quantum processes, those decisions are such tiny parts of the brain's functions that there would be no resemblance to the high-level decisions we perceive ourselves making.

Look at the research done by Wang et al. using quantum probability theory to BETTER explain decision making (whether QM is actually behind it or not).

And, I said that indeterminism allows for free will; it does not necessarily give free will.
 
And, I said that indeterminism allows for free will.

Even that's debatable. Arguably the very last thing a system supposedly exercising its free will could do with is a curve ball out of left field. Hope I've got that baseball analogy correct. :)

I understand that it is often felt that indeterminism facilitates at least the possibility of free will, but sometimes I think this is just a hangover from thinking that the game is determinism versus free will.

It's sort of, in a way, on a par with saying that more choices equals more free will. But that's a slightly different issue.
 
I'm pretty sure the humans, as articulating agents, could make similar arguments for frogs, leopards, whales, catfish, ... and bacteria. They don't because doing so would defeat their claims of uniqueness.

I for one am not claiming uniqueness. My view of what constitutes bounded free will exists for any entity that has cognition - including dogs and cats, horses and frogs, and even robots. But the degree of freedom and will varies in relation to intelligence and sapience. Dogs have agency... not nearly as much agency as a human, but more than a spider. Some animals probably have near zero agency. Rocks have no agency at all. And some animals probably have more agency than we give them credit for ;).

I doubt that our human empathy works very well when projected on other species, especially for species that are very different from our own. However, it is interesting that you attempt to quantify the degree of "freedom" on a scale of intelligence. There is something to that intuition, because the ability to project future outcomes is what gives us so many alternative paths to resolving goal conflicts and fulfilling our needs and desires. We gain more freedom of choice by expanding the options available to us. That allows us also to justify our choices, i.e. to explain why we chose to do what we did. And, as human beings, it is a facile assumption to believe that our futures are filled with a greater number of choices than those of animals we deem to have less ability to predict future outcomes.

And that is certainly why something as unpredictable as QM events has nothing at all to do with free will. How do we praise or blame decisions based on quantum fluctuations? If QM had anything to do with free will, then our behavior would be utterly unpredictable. We wouldn't ever bother to try to explain our actions. The question "Why?" would be pointless and meaningless. Nobody could be praised or blamed for anything they did.
 
I'm pretty sure the humans, as articulating agents, could make similar arguments for frogs, leopards, whales, catfish, ... and bacteria. They don't because doing so would defeat their claims of uniqueness.

I for one am not claiming uniqueness. My view of what constitutes bounded free will exists for any entity that has cognition - including dogs and cats, horses and frogs, and even robots. But the degree of freedom and will varies in relation to intelligence and sapience. Dogs have agency... not nearly as much agency as a human, but more than a spider. Some animals probably have near zero agency. Rocks have no agency at all. And some animals probably have more agency than we give them credit for ;).

I doubt that our human empathy works very well when projected on other species, especially for species that are very different from our own. However, it is interesting that you attempt to quantify the degree of "freedom" on a scale of intelligence. There is something to that intuition, because the ability to project future outcomes is what gives us so many alternative paths to resolving goal conflicts and fulfilling our needs and desires. We gain more freedom of choice by expanding the options available to us. That allows us also to justify our choices, i.e. to explain why we chose to do what we did. And, as human beings, it is a facile assumption to believe that our futures are filled with a greater number of choices than those of animals we deem to have less ability to predict future outcomes.

And that is certainly why something as unpredictable as QM events has nothing at all to do with free will. How do we praise or blame decisions based on quantum fluctuations? If QM had anything to do with free will, then our behavior would be utterly unpredictable. We wouldn't ever bother to try to explain our actions. The question "Why?" would be pointless and meaningless. Nobody could be praised or blamed for anything they did.

However, as the underpinning of stochastic processes in the brain, it certainly could play an active role in processing strategies. Not that it would give you any more free will of course. It's just looking in the wrong place.
 
I doubt that our human empathy works very well when projected on other species, especially for species that are very different from our own. However, it is interesting that you attempt to quantify the degree of "freedom" on a scale of intelligence. There is something to that intuition, because the ability to project future outcomes is what gives us so many alternative paths to resolving goal conflicts and fulfilling our needs and desires. We gain more freedom of choice by expanding the options available to us. That allows us also to justify our choices, i.e. to explain why we chose to do what we did. And, as human beings, it is a facile assumption to believe that our futures are filled with a greater number of choices than those of animals we deem to have less ability to predict future outcomes.

And that is certainly why something as unpredictable as QM events has nothing at all to do with free will. How do we praise or blame decisions based on quantum fluctuations? If QM had anything to do with free will, then our behavior would be utterly unpredictable. We wouldn't ever bother to try to explain our actions. The question "Why?" would be pointless and meaningless. Nobody could be praised or blamed for anything they did.

However, as the underpinning of stochastic processes in the brain, it certainly could play an active role in processing strategies. Not that it would give you any more free will of course. It's just looking in the wrong place.

Just as random mutations play a role in evolution, random (or pseudo-random) processes in the brain might possibly aid in creative problem solving.
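
As a loose illustration of that analogy (and nothing more - the function and the numbers here are made up for the example), a little injected randomness is what lets even a trivial search routine get off a local peak it would otherwise sit on forever:

[code]
import math
import random

def bumpy(x):
    # toy objective: a small peak near x = 1 and a taller one near x = 5
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

def climb(start, steps=10_000, jump_chance=0.1):
    """Hill climbing that keeps only improvements. Tiny steps alone strand the
    search on the first peak it reaches; occasional big random leaps (the
    'mutations' of the analogy) give it a chance of finding the taller one."""
    x = start
    for _ in range(steps):
        if random.random() < jump_chance:
            candidate = x + random.uniform(-5.0, 5.0)     # rare random leap
        else:
            candidate = x + random.choice((-0.01, 0.01))  # tiny local step
        if bumpy(candidate) > bumpy(x):                   # keep only improvements
            x = candidate
    return x

# climb(0.0, jump_chance=0.0) parks near x = 1; with the default jump_chance
# it almost always ends up near x = 5, the better of the two peaks.
[/code]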
 
I'm pretty sure the humans, as articulating agents, could make similar arguments for frogs, leopards, whales, catfish, ... and bacteria. They don't because doing so would defeat their claims of uniqueness.

I for one am not claiming uniqueness. My view of what constitutes bounded free will exists for any entity that has cognition - including dogs and cats, horses and frogs, and even robots. But the degree of freedom and will varies in relation to intelligence and sapience. Dogs have agency... not nearly as much agency as a human, but more than a spider. Some animals probably have near zero agency. Rocks have no agency at all. And some animals probably have more agency than we give them credit for ;).

Somebody wrote: more cells, more agency. ...and the winner is the sperm whale. Or is it all about brain mass relative to the rest of the body? Then perhaps it is the dolphin (mammal).

More choice per unit of biomass equals more agency? Or would that be more choice relative to the proportion of biomass? Somehow agency isn't helping with setting up full-blown free will. How do we parse 'cognitive'? 'Sapience'? Can we compare human visual sapience with dog odor sapience or dolphin acoustic sapience?

No. I think you are claiming uniqueness, but you are doing so with blinders on.

Forgive me. All I wanted to do was use the human capacity for articulation as a vehicle for expressing other species' capacities.
 
Let's take this as given. By what mechanism are you choosing to attempt to change their behavior?

Just as an example, you might say, "Don't put your elbows on the table."



By what process are you deciding that a different behavior is more acceptable?

Uh, let's see.... Am I still speaking for the determinists? If so, then I am programmed to not want the kid to put his elbows on the table. This is a result of the training I had when I was a kid, so I don't have any choice in the matter.



By what method are you even selecting what constitutes acceptable and unacceptable?

Again--and let me point out that I myself am a free willie, not a determinist--that would be a result of my programming.




- - - Updated - - -

Stochastic processes are not inevitable until they have concluded. I'm happy to dump will and free will, but you are going to struggle to sell rejecting ideas like choice, especially in a system that is in the business of predicting and explaining itself.

I think we're working at cross purposes. The suggestion was made that determinism would make learning impossible, that it would make change impossible. If that were true, nobody could program a computer.

...


You have to train them because, according to determinists, you don't have a choice.
Don't have a choice or lean towards the path of least resistance?

Wiploc make joke.

I'm not a determinist myself, but a determinist would say that if you did something, then you had to do it.

I did not get that joke. Perhaps some [Devil's Advocate][/Devil's Advocate] tags would be helpful?

[joke]

Free Willie: "You determinists. If behavior is determined, we don't have a choice. So, if we have no choice, what is the point of putting people in prison?"

Determinist: "If we don't have a choice, then we can't choose not to put them in prison. If the perp did the crime because she didn't have a choice, then we can lock her up because we don't have a choice."

[/joke]

You are not required to think that's funny. In fact, you may not even have a choice.
 
Alrighty folks, this is going to get complicated. Let's talk about a response model. Of course, this is only a model and reality is more complex. But we can contrast that response model to a "thinking" model that incorporates indeterminacy and randomness, and there we should begin to see some of the differences that we would colloquially call "free will" or "agency".

Response Model

In this model, the entity has a set of perceptors - ways to perceive the world around it. These can be eyeballs or cameras, piezoelectric crystals or hair follicles. They can come in all sorts of shapes and sizes, and can perceive any number of different types of events. For simplicity, let's consider an entity that has a visual perceptor.

The entity also has a variety of physical responses - things that the entity's physical form can do. They can blink or close shutters on their camera, they can recoil from physical stimulus, they can move appendages. For simplicity, let's consider an entity that can increase and decrease the aperture for their visual perceptor.

Another element needed in this model is a system of measurement - a gauge that gives some indication of "good" and "bad" in terms of the experience the entity interprets from the perceptions. So for example, a human might interpret very bright light directly into their eyes as unpleasant, something that can damage their vision. A robot might interpret very bright light as "bad", something that can damage the camera receptors.

So in the very simplistic example here, we have an entity that experiences an event via a perceptor, gauges that experience as good/bad, and reacts with a physical response. Now, whether this is a disembodied eyeball or an automated camera, we have a nearly deterministic algorithm involved. The entity can react to stimulus by increasing or decreasing the amount of light entering the perceptor. This is response to stimulus. At this point, the entity is perfectly predictable.

The fundamental nature of this entity's process is effectively "If A then B".

Now let's add a memory function. Let's assume the entity has a way to store past experiences, and to compare the current experience to past experiences and select the response that "best fits" the current experience based on what it has learned. This is no longer perfectly deterministic. It's close, but not exact. Some degree of uncertainty has entered the system at this point. The reaction now depends not only on the current experience, but also on what other experiences the entity has had. This makes it less predictable. Unless we know all of the experiences that the entity has had, as well as the action taken in response to each experience, we can't perfectly predict a future reaction. It also means that as soon as you have more than one entity, it gets much more complicated to predict the general behavior of this type of entity in response to stimulus.

The fundamental nature of this entity's process is effectively "If like A then B".
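
To make the contrast concrete, here's a rough, purely illustrative sketch in Python (names like REFLEX_TABLE and Memory, and the single 'light level' number standing in for the whole visual perceptor, are just inventions for this example, not anything canonical):

[code]
from dataclasses import dataclass, field

# "If A then B": a fixed stimulus-response table. The same stimulus always
# produces the same response, so the entity is perfectly predictable.
REFLEX_TABLE = {"bright": "narrow_aperture", "dark": "widen_aperture"}

def reflex_respond(stimulus: str) -> str:
    return REFLEX_TABLE[stimulus]

# "If like A then B": the entity stores past (stimulus, response) pairs and
# reacts to a new stimulus with the response attached to the most similar
# remembered one. Its behaviour now depends on its entire history.
@dataclass
class Memory:
    history: list = field(default_factory=list)  # (light_level, response) pairs

    def respond(self, light_level: float) -> str:
        if not self.history:
            # no experience yet: fall back to a fixed reflex
            return "narrow_aperture" if light_level > 0.5 else "widen_aperture"
        nearest = min(self.history, key=lambda h: abs(h[0] - light_level))
        return nearest[1]

    def remember(self, light_level: float, response: str) -> None:
        self.history.append((light_level, response))
[/code]

Two entities built from the same Memory class but carrying different histories will answer the same stimulus differently, which is the "unless we know all of the experiences the entity has had" point.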

Thinking Model

Here's the first major divergence in this approach. When we talk about "thinking" in this context, we're talking about forecasting - extrapolation, hypothesizing, and imagining. This is where we grant the entity the ability to take past experiences, project them with differences, and make a guess about what the best reaction to that future experience would be. This requires the entity to be able to look at past experiences, and categorize or cluster those experiences by things that were similar and things that were different - how much alike those past experiences were to each other, and how many "types" of experiences they've had. Clusters of experiences will form over time, but they won't be the same clusters for each entity.
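
Continuing the same toy sketch (again, the names and the deliberately crude one-dimensional clustering are my own inventions for illustration): past experiences get grouped into clusters, and the entity can "think" by projecting a hypothetical stimulus onto those clusters before the event ever happens.

[code]
import statistics

def cluster(history, width=0.2):
    """Group (light_level, response) pairs whose stimuli lie within `width`
    of the previous one. Returns (centroid, most_common_response) pairs."""
    groups = []
    for level, response in sorted(history):
        if groups and abs(level - groups[-1]["levels"][-1]) <= width:
            groups[-1]["levels"].append(level)
            groups[-1]["responses"].append(response)
        else:
            groups.append({"levels": [level], "responses": [response]})
    return [
        (statistics.mean(g["levels"]),
         max(set(g["responses"]), key=g["responses"].count))
        for g in groups
    ]

def forecast(history, imagined_level):
    """'Thinking': project a hypothetical future stimulus onto the learned
    clusters and return the response the entity expects to give."""
    centroids = cluster(history)
    if not centroids:
        return None  # nothing to think with yet
    nearest = min(centroids, key=lambda c: abs(c[0] - imagined_level))
    return nearest[1]
[/code]

Because the clusters depend on each entity's own particular history, no two entities end up with the same clusters - which is the point above.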


Indeterminate Bounded Thinking Model

Now let's assume that there's a small degree of randomness involved - enough that it's explainable by quantum fluctuations. Sometimes the electron in that neuron jogs left instead of right. Also assume that the entity doesn't consult every possible experience, or even every possible cluster of experiences - they're going to consult "enough" experiences and cluster centroids to be able to get "sufficient" fit for the current experience. To translate, they only process until they find something that is close enough to the current experience to merit handing off to the assigned reaction. In this case, not only is the behavior not perfectly predictable at the entity level, it's not predictable at the aggregate level either. But we're working with a pretty simple entity here - visual stimuli only, with very limited responses available.
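
A last piece of the sketch for this bounded, slightly random step (the numeric jitter standing in for quantum-scale noise is purely an assumption for illustration, not a physical claim):

[code]
import random

def bounded_choice(history, light_level, good_enough=0.1, jitter=0.05):
    """Scan remembered experiences in a random order and hand off to the first
    one that is 'close enough', instead of exhaustively finding the best match.
    A small random perturbation is added to each comparison."""
    candidates = list(history)
    random.shuffle(candidates)              # the order of recall isn't fixed
    for level, response in candidates:
        distance = abs(level - light_level) + random.uniform(-jitter, jitter)
        if distance <= good_enough:         # satisficing: stop at "sufficient" fit
            return response
    # nothing sufficiently similar: fall back to the plain reflex rule
    return "narrow_aperture" if light_level > 0.5 else "widen_aperture"
[/code]

Run that twice with an identical history and an identical stimulus and you can get different answers - exactly the "not perfectly predictable at the entity level" claim.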

Go ahead and scale that up, to include all of the types of perceptors that humans have, in all their variability. And include all of the different types of physical responses available to us. And include the capacity for knowledge, and the inherent pattern-finding processes. Stick that all together, and you now have a situation where the human entity can reference very complex past experience clusters, with a very small element of randomization involved... and become definitively non-deterministic in nature. Choices are being made, both in the moment of the experience and in anticipation of an experience. And these are very real choices - there is a framework for how to make a choice, but the actual choice made is subject to randomization, comparison, and a valuation of similarity.

We can slice this six ways from Sunday... but at the end of the day, we've got an inherently non-deterministic entity, with a very complex and evolving set of perception-response clusters, for whom even perfect knowledge of initial conditions doesn't guarantee predictability of final state. This is an entity that inherently makes forecast estimates of best outcomes based on assumptions of inputs prior to the occurrence of those input events.

If that doesn't qualify as agency and choice, in short free will, then there's no discussion. At that point, at least one side of this argument is engaged in belief-based argumentation. Possibly both sides. ;)


ETA: IIRC, AI is at the point of learning machines - algorithms that can incorporate new experiences, and form response patterns based on the similarity of a current experience to past experiences. But we're not yet at the stage of forecasting - AI can't yet project past experiences onto a hypothetical future experience and determine an appropriate response. And AI at this point has limited perception capability, limited storage capacity, and limited pattern recognition ability. They're all getting better, and I expect to see fairly robust AI developing the capacity for agency and choice within my lifetime. We're on the right track, it's down to processing and storage capacity, and some very complex algorithms now.

The question becomes how much inherent randomness will there be in a designed entity as compared to an evolved entity. If we develop circuits small enough to be affected by quantum fluctuations, then there's a much more real possibility for scary-smart-level AI than we're currently playing with.
 
I'm pretty sure the humans, as articulating agents, could make similar arguments for frogs, leopards, whales, catfish, ... and bacteria. They don't because doing so would defeat their claims of uniqueness.

I for one am not claiming uniqueness. My view of what constitutes bounded free will exists for any entity that has cognition - including dogs and cats, horses and frogs, and even robots. But the degree of freedom and will varies in relation to intelligence and sapience. Dogs have agency... not nearly as much agency as a human, but more than a spider. Some animals probably have near zero agency. Rocks have no agency at all. And some animals probably have more agency than we give them credit for ;).

I doubt that our human empathy works very well when projected on other species, especially for species that are very different from our own.
I'm sorry - I don't follow where empathy comes into this?

However, it is interesting that you attempt to quantify the degree of "freedom" on a scale of intelligence. There is something to that intuition, because the ability to project future outcomes is what gives us so many alternative paths to resolving goal conflicts and fulfilling our needs and desires. We gain more freedom of choice by expanding the options available to us. That allows us also to justify our choices, i.e. to explain why we chose to do what we did. And, as human beings, it is a facile assumption to believe that our futures are filled with a greater number of choices than those of animals we deem to have less ability to predict future outcomes.
I don't think it's a facile assumption. At the end of the day, some species have straight up fewer neurons than humans, and they're evolved to perform different functions. Most animals don't have prefrontal cortexes, or have less complex or smaller ones. But they're certainly not limited to humans. And when we some day meet space aliens, we may find ourselves on the short bus :p

And that is certainly why something as unpredictable as QM events has nothing at all to do with free will. How do we praise or blame decisions based on quantum fluctuations? If QM had anything to do with free will, then our behavior would be utterly unpredictable. We wouldn't ever bother to try to explain our actions. The question "Why?" would be pointless and meaningless. Nobody could be praised or blamed for anything they did.
What does praise or blame have to do with it? Other than in the most rudimentary case of one member of a group attempting to impose behavior modification onto another?

In regard to QM events... if you assume a model similar to what I outlined previously, using a form of pattern-based clustering of past experiences and a "seek until sufficiently similar" sort of a protocol, QM events would be sufficient to create a non-deterministic scenario and allow for agency.

Given the complete hypothetical of the same entity, with the same past experience, and the same initial conditions... different outcomes could be reached in different trials.
 