Compatibilism: What's that About?

The 'Quantitative' Argument for a Non-Contradictory Acceptance of Agency

This argument starts with a challenge to the fundamental axiom of determinism - that existence is in fact deterministic. To be deterministic, we must have a system in which for any given input or set of inputs, there is exactly and only one possible result. It is best represented as a mathematical formula that falls into the cluster of "n to 1" formulae.

I submit that existence is NOT deterministic, but is rather stochastic. I posit that for any set of inputs, it is possible for more than one result to occur, with each result having a different likelihood.
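To make the distinction concrete, here's a minimal Python sketch (the mapping and the weights are invented for illustration): a deterministic function maps a given input to exactly one result, while a stochastic one maps the same input to several possible results, each with its own likelihood.

```python
import random

rng = random.Random(42)

def deterministic_step(x):
    # "n to 1": a given input always yields exactly one result.
    return 2 * x + 1

def stochastic_step(x):
    # The same input can yield several results, each with a likelihood.
    outcomes = [2 * x + 1, 2 * x, 2 * x + 2]
    weights = [0.8, 0.1, 0.1]
    return rng.choices(outcomes, weights=weights)[0]

always_seven = {deterministic_step(3) for _ in range(1000)}  # only one value
varied = {stochastic_step(3) for _ in range(1000)}           # several values
```

Repeated trials never change the deterministic result; the stochastic function produces a spread of results whose frequencies track the weights.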

The premise of a deterministic existence inherently assumes that as long as we have all of the information, we can perfectly predict the outcome of any path of events. This, then, requires that it is possible to acquire all information, which in turn implies that all information is knowable in the first place. And we know that the last clause is false. Not all things are knowable. Some things are unknowable. At a bare minimum, we have quantum effects: it is impossible to know a particle's position and momentum simultaneously.

I think that unknowability extends to things much larger than quantum particles though. Let's take a simple example: how many leaves did my tree have on it last week? While we might know that an answer exists from a mathematical and philosophical perspective, we cannot actually know that answer. The number of leaves on my tree is obviously a countable number less than infinity. It's a finite number. But what is that number? Nobody knows. And nobody *can* know. Nobody counted the leaves on my tree last week. And even if someone were to have begun counting the number of leaves on my tree last week, within the time span that it would take for them to count the leaves, some leaves would have fallen or some new leaves would have budded. By the time they finished counting, their count would be inaccurate.

We could, however, make a very good estimate of the number of leaves on my tree last week. We would need to know the average number of leaves in a given volume, and whether there were temperature changes that would have caused more or fewer leaves a week ago, and the rough volume of the leaf-bearing structures on the tree. With that, we can get to an estimate that is probably good enough for most purposes.

But it wouldn't be exact. There would still remain an error bound around that estimate. We might estimate 10,000 leaves... but we would have to acknowledge that it might be anywhere between 7,000 and 13,000 for example.
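The estimate-with-error-bound idea can be sketched in a few lines (all figures here are invented for illustration): multiply an assumed leaf density by an assumed canopy volume, then propagate an assumed uncertainty on each factor with a crude Monte Carlo.

```python
import random

rng = random.Random(0)

# Invented figures: leaves per cubic metre of canopy, and canopy volume.
leaves_per_m3 = 250
canopy_m3 = 40

point_estimate = leaves_per_m3 * canopy_m3   # 10,000 leaves

# Assume each factor is only known to within +/-15%; sample the product.
samples = sorted(
    (leaves_per_m3 * rng.uniform(0.85, 1.15))
    * (canopy_m3 * rng.uniform(0.85, 1.15))
    for _ in range(10_000)
)
low, high = samples[250], samples[-251]      # rough central 95% interval
```

With +/-15% on each factor, the extremes work out to roughly 7,200 to 13,200 leaves: exactly the kind of error bound around 10,000 described above.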

I must conclude that existence is not deterministic; it is stochastic. The set of inputs to any given operation is always incomplete, and is frequently massively incomplete. It is not possible to know every single thing required to guarantee an exact, singular outcome as the only possibility.

"Okay" you might say, "But that's just randomness, that still doesn't endorse agency". Well, let's move on to that next.

As I said in my prior post, agency is then the ability to apply a pattern to externalities, make a prediction about the likely outcome, and then react to that prediction in order to influence events. Let's walk through the components of this definition.

The ability to find a pattern is inherently dependent on the ability to take in and store external information. In order to have agency of any level, the object must first have a means of perception, a way of observing and interacting with the world around it. What do we mean by perception? Perception requires that the object be able to process and react to external stimuli. The security light at my front door can do that - it senses movement and turns on when certain conditions are met. It processes the external stimulus of movement and reacts by flipping a switch to on. A rock can do none of that: it cannot process external stimuli, and it cannot react to them. There is no coding in a rock that allows it to sort and respond to conditional stimuli; thus a rock cannot have agency.

Being able to perceive externalities is not, however, sufficient by itself. The object must also be able to store salient elements of those perceptions; it must have a memory of at least some capacity. That storage capacity is integral to the ability to determine a pattern. In order to find a pattern, the object must be able to compare the elements of one event to the elements of another event and find commonalities. If there is no means of storage, then no pattern can be found. My porch light doesn't have any storage. All it can do is react, which it does quite nicely. I could attach it to some recording software, which would allow it to record what set it off. But alas, my security light would still not qualify as an agent: it has no means to compare independent recordings against one another to determine a pattern.
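Here's a toy sketch of that distinction (the class names and the "pattern" are invented for illustration): the plain light only reacts, while the recording variant stores its triggers and can compare them, which is the minimum machinery that pattern-finding needs.

```python
class SecurityLight:
    """Perceives and reacts, but stores nothing."""
    def sense(self, movement: bool) -> str:
        return "on" if movement else "off"

class RecordingLight(SecurityLight):
    """Adds memory: the precondition for finding a pattern."""
    def __init__(self):
        self.events = []                 # stored perceptions

    def sense(self, movement: bool) -> str:
        self.events.append(movement)     # remember each stimulus
        return super().sense(movement)

    def mostly_triggered(self) -> bool:
        # Compare stored events against one another and report a
        # commonality -- a (very crude) pattern.
        return sum(self.events) > len(self.events) / 2
```

The plain light's behavior is exhausted by its reaction; only the version with storage can say anything at all about its own history.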

The pattern recognition element is necessary in order to make a prediction. And with some of our more advanced technologies, we're getting quite good with pattern recognition. Marketing certainly has done its fair share of pattern recognition. Every time you get a recommendation based on your past Netflix viewing habits, that is pattern recognition in action. Every time Amazon says "other customers also bought this... " they're employing pattern recognition. Amazon also has the means to perceive and store external information; the software observes the purchases that you make as well as other items that you browsed before purchase, and it stores metadata about your purchasing history. That's how it identifies patterns in the first place.
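"Other customers also bought" can be approximated with nothing more than co-occurrence counting. A hedged sketch (the order data is invented, and real recommenders are far more elaborate):

```python
from collections import Counter
from itertools import combinations

# Invented purchase history.
orders = [
    {"kettle", "tea", "mug"},
    {"kettle", "tea"},
    {"tea", "mug"},
    {"kettle", "toaster"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def also_bought(item, n=2):
    # Rank the items most frequently co-purchased with `item`.
    related = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            related[b] += count
        elif item == b:
            related[a] += count
    return [other for other, _ in related.most_common(n)]
```

The stored metadata (which items appear in which orders) is the memory; the pair counts are the pattern; ranking them is the prediction about what you're likely to buy next.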

Does Amazon make predictions about whether or not you'll purchase what they suggest? This is where things get fuzzy, and I don't really know for certain. I'm sure that Amazon calculates probabilities with respect to related purchases, and applies those probabilities to prioritize what to suggest. I'm not sure whether they do that in an aggregate fashion or in an individual fashion with probabilities curated for each individual. I think we have a lot of technology that is right at this edge, identifying patterns and making some level of prediction.

There is some gray area between finding a pattern, employing a pattern predictively, and proactively taking action to influence an outcome. There are some solid arguments that could be made that curated advertising has agency - especially if it's dynamic and based on a learning algorithm.

There's a difference between agency and intelligence, which I won't go into here. I think a good argument could be made that many things have agency to varying degrees: Ad software might have very limited agency, as the number of criteria used to determine a pattern, and the number of actions available to make suggestions to influence behavior are necessarily very limited.

On the other hand, I would say that by my argument, my cat certainly has agency, and a decent bit of it as well. Agency is necessary for training, and the more complex the conditioning, the more agency is required. Sometimes that training isn't even intentional. For example, my cat likes freeze-dried salmon treats. They are her favorite, and given the chance she will gut the bag and eat an entire 6 oz of them (and she has). For freeze-dried food, 6 oz is a lot; I still don't know how her stomach didn't explode. Anyway, we play with her when we give her treats. Sometimes we toss them down the hall and she runs after them and chases them. Sometimes she sits at the end of the hall and plays "goalie" with them. Sometimes we give them to her outside in the courtyard. Sometimes we hold them in our hand and she eats them there with her fuzzy little muzzle tickling our fingers. Sometimes we hold them above her so she has to stand on her hind legs like a meerkat in order to get them.

That's all very cute, but let's bring this back around to agency. My cat has learned that these behaviors are associated with treats. She perceived the smell and taste of treats, and she perceived the times of day and the order of routines involved. She knows that after I get up in the morning, there will be treats. Furthermore, she knows that the treats will be given after I have filled her food and water bowls, after I have filled the coffee pot, and while the coffee is brewing. She anticipates the treats: when I fill the coffee pot and she hears it start, she stands up, because she has identified the pattern that almost always results in treats. Sometimes she's wrong - sometimes I don't have coffee, I have tea. Sometimes she doesn't get treats if she's been constipated recently. But she predicts when those treats will occur.

And beyond that, she engages in proactive behavior to influence the game for treats each day. Sometimes she will go to the door and quite clearly ask to have her treats outside. Sometimes she will run to the end of the hall and indicate that I should toss the treats to her. Sometimes she sits at the front of the hall and looks at me over her shoulder so I know she wants me to throw them so she can chase. Sometimes she meerkats for them without me prompting her at all. She has the agency to indicate what she wants and uses that agency to influence my behavior toward her desired outcome.

That's a lot about agency in here. But what, you may ask, does it have to do with a stochastic existence?

Well, here it is in a nutshell. Given that existence is stochastic, any predictions are probabilistic in nature. Sometimes the probability of a specific outcome is so close to 1.0 as to be guaranteed. Sometimes it's a true coin flip. Most of the time, the number of possible outcomes is bounded: bounded by physical constraints, bounded by time or resources, or, in the case of agency, bounded by what the agent can imagine as outcomes. The agent taking action will also be bounded by their perceptive capacity, memory capacity, facility with pattern recognition, and their extrapolative intelligence.

The set of inputs is necessarily limited. Some of the information that may affect an outcome is unknowable. The processes available to an agent are limited. And within all of that there does exist at least some element of pure randomness. As a result, while the outcome may in many cases be highly predictable, it is NOT deterministically knowable.

Sufficiently complex processes have agency, and given a set of inputs that is incomplete and contains some unknowable unknowns, the result of any given decision cannot be perfectly predicted.

The specific causes of specific effects must be reliable in order to create a consistent pattern. For example, I press the "H" key and an "h" appears in my text. But, suppose the effect of pressing the "H" was indeterministic. Suppose that sometimes when I press the "H" I get an "m". Other times I press "H" and a "7" appears.

Let's turn up the indeterminism dial. Now, when I press any key on the keyboard, I get a random letter no matter which key I press. All I will get is gibberish. My freedom to type my thoughts would be gone.

So, in order to have freedom, we must have control. In order to have control, the results of our actions must be predictable. And, in order for the results of our actions to be predictable, we must have reliable cause and effect.

Freedom requires a deterministic world, a world of reliable cause and effect. Agency requires determinism, or at least a deterministic world.
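The "indeterminism dial" is easy to simulate. A small sketch (the keyboard model is invented for illustration), where `indeterminism` is the probability that a keypress produces a random letter instead of its own:

```python
import random

rng = random.Random(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def keypress(key: str, indeterminism: float = 0.0) -> str:
    # With probability `indeterminism`, the key's effect is random.
    if rng.random() < indeterminism:
        return rng.choice(ALPHABET)
    return key.lower()

reliable = "".join(keypress(k, 0.0) for k in "HELLO")   # always "hello"
gibberish = "".join(keypress(k, 1.0) for k in "HELLO")  # random letters
```

With the dial at 0.0, the typist's intentions reliably control the output; at 1.0, the causal link between intention and effect is severed, and with it any meaningful control.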
 
In my view, an AI can have agency. And given a sufficiently large number of inputs to a decision matrix, the outcome of a decision made by an AI can become imperfectly predictable and stochastic in nature.

Seems the number may not need to be terribly large.

Fantastic article :)

I've always thought that the theory of evolution was elegant, but too limited. As it stands, evolution applies only to reproducing species. But I think that, conceptually, the same approach operates throughout all of existence. Everything operates in a dynamic environment; everything is always in flux. Formations - whether they be subatomic particles or molecules or plants or animals - that are most stable within that environment are the formations that persist. And when the environment changes (as it inevitably does), it is that surviving formation that now alters within the new environment. So you get increasing complexity of items and systems constantly 'seeking' dynamic equilibrium.

It's a borderline taoist approach: water finds its own level.
 

Now we're getting somewhere. We're really close, too.

From your recognition that agency requires determinism, it's a really short step to where I stand: that "agency" itself is the description of how a local determinism parses by physical action.
 

Don't make the mistake of equating stochastic processes with uniformly random processes. Statistics has clear cause and effect, but it is not deterministic. Statistics is stochastic.

Realistically, your keyboard already has the possibility of random letters in it. The underlying hardware or software could end up with a bug; it could already have a bug for all you know. But 99.99999999999999% of the time, any time you hit the "H" key, you're going to see "h" typed out. And that remaining 0.00...1% of the time, you'll probably chalk it up to fat-fingering the keyboard ;)

Stochastic processes have very clear causes. They have very clear effects. There simply isn't a *single* effect. There are multiple possible effects prior to the event, but ultimately only a single effect will occur.

Consider a bag full of marbles. Before you reach in, you may have a 90% chance of pulling out a red marble, and a 10% chance of pulling out a blue marble. Those chances are real chances. There are two possible effects. But the cause is clearly you sticking your hand in and picking a marble.

After you have chosen a marble, the prior probabilities are no longer relevant. The fact that you had only a 10% chance to select the blue marble that you hold in your hand doesn't alter the fact that you now have a blue marble with 100% certainty.
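The marble bag maps directly onto a weighted draw. A sketch using the numbers above (90% red, 10% blue):

```python
import random

rng = random.Random(7)

def draw():
    # Two possible effects, one clear cause: reaching in and picking.
    return "red" if rng.random() < 0.9 else "blue"

# Before drawing, both outcomes are genuinely possible, with
# frequencies that track the prior probabilities.
trials = [draw() for _ in range(10_000)]
blue_rate = trials.count("blue") / len(trials)   # close to 0.10

# After drawing, the prior is irrelevant: whichever marble you hold,
# you hold it with probability 1.0.
marble = draw()
```

Across many draws the 90/10 split shows up plainly; for any single completed draw, the probability has collapsed to certainty.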

This gets into some Bayesian stuff, which I have mostly forgotten the mechanics of at this point.
 

Bugs is slang for insect interference in older hardware.
I'd rather not go into who said what to coin the terms.
A flaw in a machine from operator error at any level of analysis has never been the sole source of error when the expected or desired outcome is not achieved by the participants or observers.
I may be alone here: stochastic is not random. That might be due to the difference between formal training and formal education.. blah
 

I hope that I don't offend, but is English not your primary language?

I have a very difficult time understanding your contributions.

Otherwise, yes, you are correct that stochastic isn't 'random', it's non-deterministic, it's probabilistic.
 
I suspect that DBT understands Marvin's point--that there is such a thing as non-deterministic behavior in a deterministic system.

I don't think this is what Marvin has been saying - he's consistently made it clear that there are no non-deterministic events in his account of free will.

I raise this because your comment could cause confusion.

You may have confused "non-deterministic behavior" with "non-deterministic events". I didn't say that there were non-deterministic events in his account. This is all about determined events in a future that agents have no knowledge of. In order to survive, they must calculate likely outcomes and make their choices on that basis and a complex set of predetermined priorities. Human "robots" are able to alter their own set of priorities on the basis of successful and failed past predictions. The agents themselves are also in a constant state of flux. Their survival is enhanced by the fact that they can adapt their "programming" on the basis of accumulated experience to better overcome future obstacles and satisfy goals.
 
If you become an artificial intelligence researcher (as I have been), then you learn a lot about nondeterministic behavior in chaotic environments. The philosophical question that I am injecting into this free will discussion is the following: Can a learning robot have "free will"? One's willingness to answer that question affirmatively depends on how far one is willing to extend the concept to cover an entity whose every action is predetermined, including its ability to learn from experience and adapt to new situations. At some point, everything about the behavior of that robot can be predetermined, but it can make choices and learn to change its behavior when faced with similar obstacles in the future. The robot doesn't know anything more about its future than human beings and other biological organisms. However, whether we say that the robot has "free will" depends on whether it is able to learn and adapt to changing circumstances. What makes its will "free" is that it is free to change its future behavior. In effect, it can regret past behavior, but not change it. It can try to be a better robot in the future. In theory, a robot could even have predetermined routines for improving its learning processes--just as humans can learn to be better learners.
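A minimal sketch of that kind of learner (the environment and all numbers are invented): every line of it is predetermined, yet the agent revises its own priorities from accumulated experience, so its future behavior differs from its past behavior.

```python
import random

rng = random.Random(3)

def reward(action):
    # Invented environment: "right" pays off 80% of the time, "left" 20%.
    p = 0.8 if action == "right" else 0.2
    return 1.0 if rng.random() < p else 0.0

values = {"left": 0.0, "right": 0.0}   # the agent's adjustable priorities
counts = {"left": 0, "right": 0}

for _ in range(500):
    # Explore occasionally; otherwise exploit the current best estimate.
    if rng.random() < 0.1 or values["left"] == values["right"]:
        action = rng.choice(["left", "right"])
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]   # running mean
```

The update rule never changes, but what the agent will choose next does: it has "learned to be a better robot" purely through a predetermined routine applied to its own history of successes and failures.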

In my view, an AI can have agency. And given a sufficiently large number of inputs to a decision matrix, the outcome of a decision made by an AI can become imperfectly predictable and stochastic in nature.

If a robot has the programming to allow it to learn, and to adapt to changing externalities, and to form preferences (or to reprioritize goals perhaps), and the flexibility to form extrapolative hypotheses and test them... then there's no reason to believe that a robot cannot have free will in the sense that I understand it. I think it's entirely plausible that we will develop AIs that have will.

I think it might be a lot less plausible that we develop AIs that have curiosity, imagination, and emotions. I don't think it's impossible, but I think that aspect of sapience is much more complex than volition.

We are largely in agreement, but I would quibble with your last paragraph. Imagination is necessary in us "robots", because that is the workspace we use to predict future outcomes. It is no accident that natural languages often express the future tense in ways very different from how they express the past and present. For example, English has past and present tense inflection on verbs, but it expresses future tense with a separate auxiliary verb--"shall" or "will". Imaginary scenarios are also expressed with modals. For example, "should" is technically a past tense inflection of "shall". Some languages even seem to lack a special tense marker for future events. Curiosity and emotions also play a functional role in decision-making, since they are factors that motivate and determine the emergence of priorities in making choices. Even robots have to be motivated to recharge their energy sources (i.e. "eat"), discard waste (i.e. "dead batteries"), and repair themselves (i.e. get "fresh batteries"). So it is natural for roboticists to build those factors into their creations as they perfect them. Right now, we are at the stage where humans still have to change the robots' diapers, but it will be necessary for them to be self-sufficient if we keep sending them out to explore moons and planets.
 
If non-deterministic events happen within a determined system, this doesn't rescue free will.

Neither random nor probabilistic events are subject to will.

Will has no more regulative control over random or probabilistic events than over determined ones.

Random or probabilistic events act upon the system, the brain, in random or probabilistic ways, which are no more a matter of choice than the state of the system at time t and how things go as a matter of natural law.
 
To me, this definition of determinism contains a critical flaw. Here is what I believe is the more accurate statement:

"Determinism means that the outcome of a decision point is the result of all prior events and influences, and that the outcome is singular: Only one possible outcome will occur."

For example, you had two "possible" outcomes: chocolate and vanilla. A "possibility" is something that you can actualize if you choose to do so. You were offered chocolate, so chocolate was a real possibility. You were also offered vanilla, so vanilla was also a real possibility. You had two real possibilities, but you had to make a choice before you could have either one. So, you gave it some thought, and you chose chocolate.

You could have chosen vanilla if you wanted to, but you preferred chocolate today, so you chose chocolate. And you then told whoever was handing out the ice cream, "I will have chocolate, thank you". That was your freely chosen "will". The final responsible prior cause of the choice was you, of course.

:D I'm going to play Devil's Advocate for a moment. I don't disagree with your conclusion, but I challenge your methodology to reach that conclusion. I'm also playing Devil's Advocate, because I suspect that my view with respect to a deterministic existence will be sufficiently covered by my 'quantitative' argument.

*Channeling a Determinist*

As you wish.

A Hard Determinist said:
That choice is not a real possibility though. You might *think* you have a choice, but you don't actually.

I walk into a restaurant, sit at a table, and browse the menu. I know that I actually have a choice, because I'm staring at a literal menu of choices. I consider items that catch my fancy, and choose the one that I estimate will best satisfy both my tastes and my dietary objectives. All these events occurred in objective reality. I was not imagining them.

A Hard Determinist said:
Because every single element involved in that decision is fixed and (hypothetically) known, you aren't actually exerting any agency.

Then point out the object that did exert agency. And tell the waiter to bring him the bill.

A Hard Determinist said:
You're just running that set of inputs through a very complex algorithm. Any time you run that algorithm with that same set of inputs, you would get the exact same answer.

The algorithm is not that complex. I see several things I like on the menu. But I already had one of them for lunch. My second option would taste great, but it has a high salt and fat content. So, I think I will have the Chef's Salad this time.

And, if we rewound the clock to the same point in time, then I would always make the same choice, for the same reasons, even though I could have chosen anything on the menu. I mean, why would I make a different choice if I had good reasons for it the first time around? I wouldn't.

However, to say that "I wouldn't" does not imply that "I couldn't".

A Hard Determinist said:
So even though it might seem like you're making a decision, you don't actually have a choice.

In empirical reality, it is objectively true that I did, in fact, perform a choosing operation, in which I had a menu full of options, and from which I made my choice. If I didn't do it, then point to the guy who did, and tell the waiter to bring him the bill.

A Hard Determinist said:
You can only ever come to the exact same decision.

No. I will only ever come to the exact same decision. What I can do is a different matter entirely. For example, I could have chosen any item on the menu. But, I wouldn't.

A Hard Determinist said:
It's like a pachinko machine. You drop a marble, it bounces off of pegs, and it ends up in one of several holes at the bottom. If we could know the exact characteristics of every atom that could in any way influence that marble, and if we could drop that marble from the exact same location in space each time, it would always hit the same pegs, and it would always go through the same hole. Your brain is just a really complicated pachinko machine.

The analogy is false, because the pachinko machine lacks the machinery for making choices. I have that machinery.

A Hard Determinist said:
It's only because you're self aware that you think you're making a choice.

No. I objectively observed myself making the choice. Had I not made the choice, I would have gone without ice cream. No other object in the physical universe made that choice for me. I had to do it all by myself.

A Hard Determinist said:
In reality, the ice cream that you "choose" is completely and entirely a result of the atoms in your brain and every atom around you at that time. You don't actually have any ability to alter or change any of those atoms.

A coed is invited to a party, but she has a chemistry exam in the morning. So, she decides to turn down the party and spend the evening studying instead.

She thoroughly reviews her textbook and her lecture notes. As she does so the neural pathways associated with this information become stronger. When she takes the test in the morning, the questions trigger those pathways, and the answer comes to her, just as she hoped.

Note that she has, by her own conscious intent, modified her own brain at the neural level. I would suppose that this moved around a few atoms as well.

A Hard Determinist said:
You're just a complicated program that thinks that it thinks.

In order to think that I think, I must think. Therefore, my belief, that I am doing the thinking, is confirmed.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Some poetic license, sure, because I really really don't think that determinism holds water.

As a compatibilist, I argue that determinism does hold water. Universal causal necessity/inevitability is a logical fact, derived from the presumption of a world of perfectly reliable cause and effect. However, although it is a logical fact, it is neither a meaningful nor a relevant fact.

It is not meaningful, because what I will inevitably do is exactly identical to me just being me, choosing what I choose, and doing what I do. And that is not a meaningful constraint upon any of my freedoms.

It is not relevant, because there is nothing one can, or needs to, do about it. So, it's rather pointless to try to shift responsibility to it. Instead, we hold responsible the causes that we can actually do something about.

All of the implications drawn by the hard determinist, or free will skeptic, are false.

Randomness exists.

The notion of "random" exists to deal with the problem of predictability. An event may still be reliably caused, but be unpredictable in practice. We flip a coin to see who goes first, because we want the result to be unpredictable. But the behavior of the coin is still reliably caused by the location and force exerted by the thumb, then by the air resistance, and then by how it bounces on the floor or ground.

We could build a machine that flips the coin in a way that always lands heads up, by controlling those variables. A skilled knife thrower controls the number of rotations of the knife so that the point hits the target instead of the hilt.

So, behavior that appears random is most likely reliable, but unpredictable.
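Pseudo-random number generators make the same point in software: a perfectly reliable, deterministic process whose output is unpredictable to anyone who doesn't know the starting conditions. A minimal sketch, with the seed playing the role of the thumb's position and force:

```python
import random

def flip_coins(seed, n=10):
    """Deterministically produce n "coin flips" from a fixed seed."""
    rng = random.Random(seed)  # same starting conditions every time
    return [rng.choice("HT") for _ in range(n)]

# Identical conditions reliably cause the identical "random" sequence:
assert flip_coins(seed=42) == flip_coins(seed=42)
# Yet without knowing the seed, the sequence looks like pure chance.
```

This is the coin-flipping machine in miniature: control the variables and the "randomness" disappears.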


That said, the fundamental argument is the same. Your perception that you are making a decision is an artifact of your mind.

What isn't an artifact of the mind?

In objective reality, ...

Objective reality can also be called an artifact of the mind. The brain organizes sensory data into a model of reality. When the model is accurate enough to be useful, as when we navigate our body through a doorway, then we call that "reality", because the model is our only access to reality. But when the model is inaccurate enough to cause problems, like when we walk into a glass door, thinking it was open, then that is called an "illusion".

... the probability of you choosing chocolate was always and only 100%, and the probability of you choosing vanilla was always and only 0%.

If the probability of my choice was always 100%, then the probability that it would be me, and no other object in the universe, that would do the choosing, was also 100%.


If enough information were known beforehand, a sufficiently informed computer would be able to perfectly predict what flavor of ice cream you would have in every single situation ever.

Predicting is not causing. The choice can be reliably caused whether it is ever predicted or not. So, unpredictability does not undermine determinism. And, reliable cause and effect is a prerequisite for predictability.
 
If you become an artificial intelligence researcher (as I have been), then you learn a lot about nondeterministic behavior in chaotic environments. The philosophical question that I am injecting into this free will discussion is the following: Can a learning robot have "free will"? One's willingness to answer that question affirmatively depends on how far one is willing to extend the concept to cover an entity whose every action is predetermined, including its ability to learn from experience and adapt to new situations. At some point, everything about the behavior of that robot can be predetermined, but it can make choices and learn to change its behavior when faced with similar obstacles in the future. The robot doesn't know anything more about its future than human beings and other biological organisms. However, whether we say that the robot has "free will" depends on whether it is able to learn and adapt to changing circumstances. What makes its will "free" is that it is free to change its future behavior. In effect, it can regret past behavior, but not change it. It can try to be a better robot in the future. In theory, a robot could even have predetermined routines for improving its learning processes--just as humans can learn to be better learners.

In my view, an AI can have agency. And given a sufficiently large number of inputs to a decision matrix, the outcome of a decision made by an AI can become imperfectly predictable and stochastic in nature.

If a robot has the programming to allow it to learn, and to adapt to changing externalities, and to form preferences (or to reprioritize goals perhaps), and the flexibility to form extrapolative hypotheses and test them... then there's no reason to believe that a robot cannot have free will in the sense that I understand it. I think it's entirely plausible that we will develop AIs that have will.

I think it might be a lot less plausible that we develop AIs that have curiosity, imagination, and emotions. I don't think it's impossible, but I think that aspect of sapience is much more complex than volition.

We are largely in agreement, but I would quibble with your last paragraph. Imagination is necessary in us "robots", because that is the workspace we use to predict future outcomes. It is no accident that natural languages often express the future tense in ways very different from how they express the past and present. For example, English has past and present tense inflection on verbs, but it expresses future tense with a separate auxiliary verb--"shall" or "will". Imaginary scenarios are also expressed with modals. For example, "should" is technically a past tense inflection of "shall". Some languages even seem to lack a special tense marker for future events. Curiosity and emotions also play a functional role in decision-making, since they are factors that motivate and determine the emergence of priorities in making choices. Even robots have to be motivated to recharge their energy sources (i.e. "eat"), discard waste (i.e. "dead batteries"), and repair themselves (i.e. get "fresh batteries"). So it is natural for roboticists to build those factors into their creations as they perfect them. Right now, we are at the stage where humans still have to change the robots' diapers, but it will be necessary for them to be self-sufficient if we keep sending them out to explore moons and planets.

I was more trying to say that I think programming imagination and emotions in a robot seems like it would be more complex than an adaptive learning algorithm. I think we either already have or are right on the cusp of adaptive learning algorithms already. The intuition/irrational/imagination element I don't think we're close to at the moment.

Then again, I also think that agency and intelligence are very different things.
 
The 'Quantitative' Argument for a Non-Contradictory Acceptance of Agency

This argument starts with a challenge to the fundamental axiom of determinism - that existence is in fact deterministic. To be deterministic, we must have a system in which for any given input or set of inputs, there is exactly and only one possible result. It is best represented as a mathematical function in the class of "n-to-1" mappings: many inputs, exactly one output.

I submit that existence is NOT deterministic, but is rather stochastic. I posit that for any set of inputs, it is possible for more than one result to occur, with each result having a different likelihood.
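The two pictures can be set side by side in code. This is a toy contrast only; the outcome sets and weights are invented for illustration:

```python
import random

def deterministic_process(x, y):
    # "n to 1": this set of inputs maps to exactly one possible result.
    return x + y

def stochastic_process(x, y, rng):
    # "n to many": the same inputs can produce more than one result,
    # each with its own likelihood (weights invented for illustration).
    outcomes = [x + y, x + y + 1, x + y - 1]
    return rng.choices(outcomes, weights=[0.8, 0.1, 0.1])[0]

# The deterministic process gives one answer, every time:
assert all(deterministic_process(2, 3) == 5 for _ in range(100))

# The stochastic process, given identical inputs, yields several
# distinct outcomes over many trials:
rng = random.Random(0)
observed = {stochastic_process(2, 3, rng) for _ in range(1000)}
```

Under determinism, `observed` could only ever contain one value; under a stochastic picture, the same inputs fan out into a weighted set of possibilities.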

The premise for a deterministic existence inherently assumes that as long as we have all of the information, we can perfectly predict the outcome of any path of events. This, then, requires that it is possible to acquire all information, which subsequently implies that all information is knowable in the first place. And we know that the last clause is false. Not all things are knowable. Some things are unknowable. At a very base minimum, we have quantum effects, where it is impossible to simultaneously know a particle's position and momentum.

I think that unknowability extends to things much larger than quantum particles though. Let's take a simple example: how many leaves did my tree have on it last week? While we might know that an answer exists from a mathematical and philosophical perspective, we cannot actually know that answer. The number of leaves on my tree is obviously a countable number less than infinity. It's a finite number. But what is that number? Nobody knows. And nobody *can* know. Nobody counted the leaves on my tree last week. And even if someone were to have begun counting the number of leaves on my tree last week, within the time span that it would take for them to count the leaves, some leaves would have fallen or some new leaves would have budded. By the time they finished counting, their count would be inaccurate.

We could, however, make a very good estimate of the number of leaves on my tree last week. We would need to know the average number of leaves in a given volume, and whether there were temperature changes that would have caused more or fewer leaves a week ago, and the rough volume of the leaf-bearing structures on the tree. With that, we can get to an estimate that is probably good enough for most purposes.

But it wouldn't be exact. There would still remain an error bound around that estimate. We might estimate 10,000 leaves... but we would have to acknowledge that it might be anywhere between 7,000 and 13,000 for example.
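That estimate-with-error-bounds procedure is simple enough to write down. A minimal sketch, where the canopy volume, leaf density, and 30% error figure are all made-up illustrations:

```python
def estimate_leaves(canopy_volume_m3, leaves_per_m3, relative_error=0.3):
    """Point estimate with an error bound: density x volume,
    plus an acknowledged uncertainty on either side."""
    estimate = canopy_volume_m3 * leaves_per_m3
    low = estimate * (1 - relative_error)
    high = estimate * (1 + relative_error)
    return estimate, low, high

# e.g. a 50 cubic-meter canopy at roughly 200 leaves per cubic meter:
est, low, high = estimate_leaves(50, 200)
# about 10,000 leaves, but plausibly anywhere from ~7,000 to ~13,000
```

Good enough for most purposes, but never exact: the inputs themselves are only approximately knowable, so the bounds never close to zero.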

I must conclude that existence is not deterministic; it is stochastic. The set of inputs to any given operation is always incomplete, and is frequently massively incomplete. It is not possible to know every single thing required in order to guarantee an exact, singular outcome as the only possibility.

"Okay" you might say, "But that's just randomness, that still doesn't endorse agency". Well, let's move on to that next.

As I said in my prior post, agency is then the ability to apply a pattern to externalities, make a prediction about the likely outcome, and then react to that prediction in order to influence events. Let's walk through the components of this definition.

The ability to find a pattern is inherently dependent on the ability to take in and store external information. In order to have agency of any level, the object must first have a means of perception, a way of observing and interacting with the world around it. What do we mean by perception? Perception requires that the object be able to process and react to external stimuli. The security light at my front door can do that - it senses movement and turns on when certain conditions are met. It processes the external stimulus of movement and reacts by flipping a switch to on. A rock cannot do any of that: it cannot process external stimuli, and it cannot react to them. There is no coding in a rock that allows it to sort and respond to conditional stimuli; thus a rock cannot have agency.
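The difference between the light and the rock is just the presence of a conditional stimulus-response rule, something like this hypothetical sketch (the condition names are invented):

```python
class SecurityLight:
    """Processes one external stimulus (motion) and reacts to it."""

    def __init__(self):
        self.on = False

    def sense(self, motion_detected, is_dark):
        # The conditional "coding" a rock lacks: react only when
        # the stimulus meets certain conditions.
        self.on = motion_detected and is_dark

light = SecurityLight()
light.sense(motion_detected=True, is_dark=True)
assert light.on          # stimulus met the conditions -> it reacts
light.sense(motion_detected=True, is_dark=False)
assert not light.on      # daylight: same stimulus, no reaction
```

A rock has no `sense` method at all; the light has exactly one, which is why it sits at the very bottom rung of this ladder.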

Being able to perceive externalities is not, however, sufficient by itself. The object must also be able to store salient elements of those perceptions; it must have a memory of at least some capacity. That storage capacity is integral to the ability to determine a pattern. In order to find a pattern, the object must be able to compare the elements of one event to the elements of another event and find commonalities. If there is no means of storage, then no pattern can be found. My porch light doesn't have any storage. All it can do is react, which it does quite nicely. I could attach it to some recording software, which would allow it to record what set it off. But alas, my security light would still not qualify as an agent: it has no means to compare independent recordings against one another to determine a pattern.
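Adding storage is exactly what makes pattern-finding possible: with a log of past events, the device can compare them and extract commonalities. A minimal sketch, with invented event fields:

```python
class RecordingSensor:
    """A sensor with memory: it stores each triggering event so that
    separate events can later be compared for common features."""

    def __init__(self):
        self.events = []  # without this storage, no pattern is findable

    def record(self, event):
        self.events.append(event)

    def common_features(self):
        # Intersect the feature sets of all stored events.
        if not self.events:
            return {}
        common = set(self.events[0].items())
        for event in self.events[1:]:
            common &= set(event.items())
        return dict(common)

sensor = RecordingSensor()
sensor.record({"time": "02:00", "size": "small", "heat": "warm"})
sensor.record({"time": "02:10", "size": "small", "heat": "warm"})
# The pattern across triggers: small, warm things (a prowling cat, perhaps).
```

The `record` step is what my upgraded porch light would gain; the `common_features` step is the comparison it would still need before it could count as finding a pattern.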

The pattern recognition element is necessary in order to make a prediction. And with some of our more advanced technologies, we're getting quite good with pattern recognition. Marketing certainly has done its fair share of pattern recognition. Every time you get a recommendation based on your past Netflix viewing habits, that is pattern recognition in action. Every time Amazon says "other customers also bought this... " they're employing pattern recognition. Amazon also has the means to perceive and store external information; the software observes the purchases that you make as well as other items that you browsed before purchase, and it stores metadata about your purchasing history. That's how it identifies patterns in the first place.
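A stripped-down version of that "other customers also bought" pattern recognition fits in a few lines. The purchase history and item names below are entirely invented, and real recommender systems are far more sophisticated, but the co-occurrence counting idea is the same:

```python
from collections import Counter
from itertools import combinations

def also_bought(orders, item, top_n=2):
    """A toy 'other customers also bought': count how often each pair
    of items appears together in past orders, then suggest the items
    most frequently co-purchased with `item`."""
    co_counts = Counter()
    for order in orders:
        for a, b in combinations(set(order), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1
    paired = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [b for b, _ in paired.most_common(top_n)]

# A toy purchase history:
orders = [
    ["tent", "sleeping bag", "lantern"],
    ["tent", "sleeping bag"],
    ["tent", "sleeping bag"],
    ["tent", "lantern"],
]
suggestions = also_bought(orders, "tent")
# "sleeping bag" (3 co-purchases) ranks above "lantern" (2)
```

Note that this already has the first two ingredients of agency from the definition above: it perceives (records purchases) and stores (the order history), and from those it extracts a pattern.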

Does Amazon make predictions about whether or not you'll purchase what they suggest? This is where things get fuzzy, and I don't really know for certain. I'm sure that Amazon calculates probabilities with respect to related purchases, and applies those probabilities to prioritize what to suggest. I'm not sure whether they do that in an aggregate fashion or in an individual fashion with probabilities curated for each individual. I think we have a lot of technology that is right at this edge, identifying patterns and making some level of prediction.

There is some gray area between finding a pattern, employing a pattern predictively, and proactively taking action to influence an outcome. There are some solid arguments that could be made that curated advertising has agency - especially if it's dynamic and based on a learning algorithm.

There's a difference between agency and intelligence, which I won't go into here. I think a good argument could be made that many things have agency to varying degrees: Ad software might have very limited agency, as the number of criteria used to determine a pattern, and the number of actions available to make suggestions to influence behavior are necessarily very limited.

On the other hand, I would say that by my argument, my cat certainly has agency, and a decent bit of it as well. Agency is necessary for training, and the more complex the conditioning the more agency is required. Sometimes that training isn't even intentional. For example, my cat likes freeze-dried salmon treats. They are her favorite, and given the chance she will (and once did) gut the bag and eat an entire 6 oz of them. For freeze-dried food, 6 oz is a lot; I still don't know how her stomach didn't explode. Anyway, we play with her when we give her treats. Sometimes we toss them down the hall and she runs after them and chases them. Sometimes she sits at the end of the hall and plays "goalie" with them. Sometimes we give them to her outside in the courtyard. Sometimes we hold them in our hand and she eats them there with her fuzzy little muzzle tickling our fingers. Sometimes we hold them above her so she has to stand on her hind legs like a meerkat in order to get them.

That's all very cute, but let's bring this back around to agency. My cat has learned that these behaviors are associated with treats. She perceived the smell and taste of treats, and she perceived the times of day and the order of routines involved. She knows that after I get up in the morning, there will be treats. Furthermore, she knows that the treats will be given after I have filled her food and water bowl, and after I have filled the coffee pot, and while the coffee is brewing. She anticipates the treats: when I fill the coffee pot and she hears it start, she stands up, because she has identified the pattern that almost always results in treats. Sometimes she's wrong - sometimes I don't have coffee, I have tea. Sometimes she doesn't get treats if she's been constipated recently. But she predicts when those treats will occur.
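Her anticipation amounts to a conditional probability learned from remembered mornings. A toy sketch, with an invented week of history:

```python
def treat_probability(history, cue):
    """Estimate P(treats | cue) from remembered mornings.

    Each remembered morning is a pair (cue_observed, treats_followed)."""
    relevant = [treats for c, treats in history if c == cue]
    if not relevant:
        return 0.0
    return sum(relevant) / len(relevant)

# A remembered week: coffee brewing almost always meant treats.
history = [
    ("coffee", True), ("coffee", True), ("coffee", True),
    ("coffee", False),   # the occasional exception
    ("tea", False),
]
p = treat_probability(history, "coffee")  # 3 of 4 coffee mornings -> 0.75
```

The prediction is probabilistic, not certain, which is exactly why she is sometimes wrong: a high-but-imperfect estimate, built from perception, memory, and pattern.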

And beyond that, she engages in proactive behavior to influence the game for treats each day. Sometimes she will go to the door and quite clearly ask to have her treats outside. Sometimes she will run to the end of the hall and indicate that I should toss the treats to her. Sometimes she sits and the front of the hall and looks at me over her shoulder so I know she wants me to throw them so she can chase. Sometimes she meerkats for them without me prompting her at all. She has the agency to indicate what she wants and uses that agency to influence my behavior toward her desired outcome.

That's a lot about agency in here. But what, you may ask, does it have to do with a stochastic existence?

Well, here it is in a nutshell. Given that existence is stochastic, any predictions are probabilistic in nature. Sometimes the probability of a specific outcome is so close to 1.0 as to be guaranteed. Sometimes it's a true coin flip. Most of the time, the number of possible outcomes is bounded: bounded by physical constraints, bounded by time or resources, or, in the case of agency, bounded by what the agent can imagine as outcomes. The agent taking action will also be bounded by their perceptive capacity, memory capacity, facility with pattern recognition, and their extrapolative intelligence.

The set of inputs is necessarily limited. Some of the information that may affect an outcome is unknowable. The processes available to an agent are limited. And within all of that there does exist at least some element of pure randomness. As a result, while the outcome may in many cases be highly predictable, it is NOT deterministically knowable.

Sufficiently complex processes have agency, and given a set of inputs that is incomplete and contains some unknowable unknowns, the result of any given decision cannot be perfectly predicted.

If we look up the definition of 'agent' we get: something that produces or is capable of producing an effect

Linguistically, it sounds like the term is shorthand for saying: this [object/thing/being] should be given real consideration because it could have an effect on our own well-being. But beyond that it's really a generalization and not a binary; there is no clear delineation or sharp boundary on when something does or does not have agency. Point being (throwing back to my earlier post) that "agency" is just a convenient linguistic construct, and doesn't actually tell us anything specific about what we're describing.

IMO, this is important because discussing ipso facto agency doesn't really get us closer to the definition, meaning, or objective reality of a human life. But, on the other hand, you've already described a number of other properties of human beings: pattern recognition, memory, stochastic existence, etc. To me it's actually knowing these qualities that is important to understanding human life and experience. It doesn't really matter whether they imply free will or agency, or anything else, because these properties describe what we actually are objectively. Ultimately, they can't prove that we have free will or agency, because these two terms are just linguistic constructs with no concrete definition. We're free to call people agents if we want to, but that doesn't really tell us anything meaningful about their lived experience. So again, being stochastic, having pattern recognition, etc. is what's actually important to understanding our lived experience, rather than obsessing over whether we are/are not free, or are/are not agents.

Further, if we're looking at the concept of freedom, I think it's also crucial to include the environment in which we live and survive. To me, one of the very tangible constraints on our freedom isn't how we function, but how we can't escape our own culture, biological needs, and moral law. In a very real way we aren't free, not because of the implications of physical law, but because culture and biology limit the range of our behaviour.
 
Bugs is slang for insect interference in older hardware.
I'd rather not go into who said what to coin the terms.
A flaw in a machine from operator error at any level of analysis has never been the sole source of error when the expected or desired outcome is not achieved by the participants or observers.
I maybe alone here Stochastic is not random, that might be due to the difference between formal training and formal education.. blah

I hope that I don't offend, but is English not your primary language?

I have a very difficult time understanding your contributions.

Otherwise, yes, you are correct that stochastic isn't 'random', it's non-deterministic, it's probabilistic.
Enligh is what I'm using. Explain what you will without determinism.
 
I walk into a restaurant, sit at a table, and browse the menu. I know that I actually have a choice, because I'm staring at a literal menu of choices. I consider items that catch my fancy, and choose the one that I estimate will best satisfy both my tastes and my dietary objectives. All these events occurred in objective reality. I was not imagining them.

It is all performed before you are aware of what you are looking at, feeling, considering or thinking. The selection is made (the only possible action in that moment in time), milliseconds prior to it being brought to conscious awareness.
 
Compatibilism does.

It really doesn't.

After all the painstaking explanations from Marvin (and many others over the years) you still fundamentally misunderstand compatibilism.

Some compatibilists clearly want it both ways, as I have pointed out numerous times and supported by independent accounts, citations, etc. I know what the given definition of compatibilism is, yet Marvin has suggested the possibility of choice, regulative control and doing otherwise.

You must be skimming.
 
Compatibilism does.

It really doesn't.

After all the painstaking explanations from Marvin (and many others over the years) you still fundamentally misunderstand compatibilism.

Some compatibilists clearly want it both ways, as I have pointed out numerous times and supported by independent accounts, citations, etc. I know what the given definition of compatibilism is, yet Marvin has suggested the possibility of choice, regulative control and doing otherwise.

You must be skimming.

Well, I don't disagree with most of the fundamental ideas that Marvin presents. I've been running fast and hard and loose, hacking my way through philosophy because I don't owe anyone here anything but to say what I say freely: I know I am wrong, but am attempting to surround my wrongness from every direction I can find. Perhaps I will isolate "wrongness" itself. Or perhaps someone else will.

"Free will" as a useful concept, starts at a certain point, but determinism is not anywhere near that point. Determinism and agency have to filter all the way through self modification strategies before "self" and "goal" begin to interact with "other" in a way where ideas like "coercion" and "freedom" take shape.

It's like math. Set theory looks a lot different than "1+1" at its very base. Ethics and determinism and philosophy of agency are very different at this level too. In many ways it is dangerous to discuss perfect translation and transcription before one understands the fundamental value of social through "abstract approximation" (I don't have a better term?)

Anyway, if anyone wants to discuss Demonology of Faeries just spin up a thread and ping me in PM.

I find that a lot more fun than discussing trans bullshit or even this, though this discussion has helped clarify and generalize certain frameworks even for myself.

To DBT: Marvin has suggested, perhaps, no different than I say above, that somewhere walking high above the set theory of agency, there is an abstraction which demands some discussion of coercion, and freedom from it being something to coerce of the world around us.

This is compatibilism: not that free will and determinism are a dichotomous system, but that free will approximates, or perhaps even offers a simplification, when deriving the abstract principles from the base ones.

My participation here was for the sake of driving away such special constructs at the base level of the discussion, to be general.
 
No moment in time has special properties over the one before or the one after.

(Actually, the key properties of "a moment in time" are where everything is and what everything is doing at that moment. In the next moment, the interaction of things in the previous moment will causally determine where they will be and what they will be doing in this new moment. But that's another topic.)
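The claim above can be put in minimal, concrete terms: a "moment" is fully described by where everything is and what it is doing, and the next moment follows from the current one by a fixed rule. Here is a small sketch of that idea; the particles, their positions and velocities, and the update rule are all invented purely for illustration, not anyone's actual model.

```python
def step(state, dt=1.0):
    """Advance one moment: each particle's next position follows
    causally from its current position and velocity."""
    return [(x + v * dt, v) for (x, v) in state]

# A "moment": a list of (position, velocity) pairs for everything there is.
moment = [(0.0, 1.0), (5.0, -2.0)]

# In a deterministic system the same moment always has exactly one
# possible successor: running the rule twice gives identical results.
assert step(moment) == step(moment)
print(step(moment))  # → [(1.0, 1.0), (3.0, -2.0)]
```

The point of the sketch is only that determinism, so described, is a property of the transition rule: one state in, exactly one state out.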


That is essentially what I said. The point being that it is the information conditions in the next moment that enable (determine) an action that was not available to you a moment ago. That is not a matter of choice, or something that you willed, just a determined web of events unfolding.

Within a determined system, you are simply a part of its unfolding progression of events over time (unless we have block time, which is another topic).

And, at the end of choosing, we will still say "You were able to choose A and you were able to choose B, but you decided to choose A, even though you could have chosen B".


That is the illusion of limited perspective. We talk like that, and from our limited perspective it makes sense because that is how things appear to be. Yet appearances can be deceptive.

That it appears to us that 'you were able to choose A and you were able to choose B, but you decided to choose A, even though you could have chosen B' is an illusion for the reasons given above: you don't have actual options at any point in time, only apparent options, because each moment in time is determined. Consequently, when you are [apparently] presented with option A or B, what follows - at time t and as a matter of natural law - is determined by elements beyond your regulative control, and what transpires is your only possible response.

There is no regulative control in determinism. Compatibilism is based on the ability to act without coercion or compulsion... which is problematic for the given reasons.
 
Some compatibilists clearly want it both ways, as I have pointed out numerous times and supported by independent accounts, citations, etc. I know what the given definition of compatibilism is, yet Marvin has suggested the possibility of choice, regulative control and doing otherwise.

You must be skimming.

The whole universe acts without coercion, compulsion, or the ability to choose otherwise...
 
Compatibilism says the level of agency that is "universe" is not an appropriate place for discussing free will; that "agency" and "free will" exist entirely in different forms of frame. That one is discussing {{1},{1,2}}•{{1}}~{{1},{1,2},{1,2,3}} and the other is discussing "what is an apple, and why do I want one?"

My purpose here was to discuss the idea, see other views, and see what my views were in the presence of them, and to speak them so that others may be as wrong as I am.
 
Compatibilists give their definition of free will; incompatibilists in turn argue "that a deterministic universe is completely at odds with the notion that persons have free will, the latter being defined as the capacity of conscious agents to choose a future course of action among several available physical alternatives. Thus, incompatibilism implies that there is a dichotomy between determinism and free will."
 