
Compatibilism: What's that About?

I've presented the case for computers being classified as mental things which may not be considered independent of mental perception.
And it's a clearly bad case, rotten to its core, decayed like bad wood by a lifetime of misconception.

It comes directly from the definition I presented. Argue with Webster.
Ah, argumentum ad dictum.

The computer is an object. I repeat, it would be exactly the thing it was if you found it on the road not knowing what it was or where it came from. You could observe the same relationships of the same parts.

A computer is an object and may be considered as one.
So does the computer you see exhibit the same properties as the one over there, not seen by you, running an experiment for measuring human responses to the movement of sounds it is commanding to move in the anechoic chamber? Or is it an analog computer or a digital computer? Are you saying all computers are the same regardless of what or whom, how, why, or where one observes them?

They are all objects. But it takes more than an observer's declaration to specify something is an object. It matters where and why one declares a computer an object since objects may be material or mental derivatives.

Seems to me you are saying a computer is an abstraction of a particular machine matching your mental model, not an object at all. Your notions of object aren't descriptive or prescriptive.
 
No, I am saying specific computers are the same regardless of who is watching the specific computer.

Hence why I point, to exemplify this fact, at a specific computer, namely the computer in the corner of my office.

It is an object made of sand, metal, glass, etc., with a specific shape and set of properties.


Look back very carefully at the language I use: "Computers are objects", "the computer is an object".

It has nothing to do with the mental model of whoever looks at the computer; it's just a thing, being as it is.

It just so happens that the thing, the object that it is, contains a thing, which we would describe as "a list of instructions with a requirement", and that it observably shall meet its requirement or fail its requirement. Which one happens is always the thing that is going to happen. We all recognize which. And if the thing that happens is shaped one way we call it "free" and if it happens the other way we call it "unfree". It was always going to be either "free" or "unfree", whichever it happened to be, the same way as when you look at the spin on a particle, you can expect it to only be one of "up" or "down".

In the same way, if you could predict the spin of a particle before you looked because you know the secret sauce of the RNG, it would still be "up" or "down"; it would still have this property.

And so too the computer, not some imaginary idea of a computer that mutates into everything all the time, but a specific computer, in some specific permutation, containing a will. Some of those wills are notably "free" by compatibilist definitions of such, even if we knew the result of the transistors before it came to be, and the computer would be identifiably so even if it were being observed by an alien, by an ancient Roman finding a thing just so in the middle of the road, by a human today, or by myself.
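As a rough sketch of that "list of instructions with a requirement" idea (the function name and the counting requirement below are invented purely for illustration, not taken from any actual program):

# Illustrative only: a "will" here is a list of instructions plus a requirement
# checked afterwards. Whichever outcome happens was always going to happen;
# "free" or "unfree" just labels which way it went, like "up" or "down" for spin.

def execute_will(instructions, requirement):
    state = {"count": 0}
    for step in instructions:
        step(state)                          # every step is fully determined
    return "free" if requirement(state) else "unfree"

add_three = [lambda s: s.update(count=s["count"] + 1) for _ in range(3)]

print(execute_will(add_three, lambda s: s["count"] >= 3))      # prints "free"
print(execute_will(add_three[:2], lambda s: s["count"] >= 3))  # prints "unfree"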
 
Right out of the bag you say "specific computers are the same regardless of who is watching the specific computer". Yet I mentioned at least three types of computer, observed and unobserved, obviously different from each other in many material and functional ways. It would be one thing if each named computer worked the same way. But that would be a stretch equivalent to saying humans and life were the same thing because they had DNA in common. Just not usable in particular discourse.

Let me try another tack. Some machines are used to build planes, others are used to turn wood. They are both machines, yet one is a riveter and the other is a lathe. Your argument that all computers are computers in spite of different function and purpose fails for the same reason. No one insists both are identical machines. One is digital, the other analog; they both compute, but they are different.

You need to loosen up from your embedded notion that what is viewed by a person is the same as what is there independent of the person. One exists, the other exists in the mind. Very important to understanding the differences between material and subjective.

Critical to understand that what is observed by the mind is never subject to what the mind attributes to the observation. What the mind attributes to it is not the same as what is materially existent to the object independent of being observed. It's why one conducts experiments independent of mental observation.

It is clear in your argument. You include one's mind in every instance. You never consider the nature of the object independent of one's observation of it.
 
jarhyn said:
specific computers are the same regardless of who is watching the specific computer
"Harry that Tom sees is exactly the same Harry that Bob sees".

This is acknowledgment of the axiom of universal objectivity and naturalism in general: that material things observed are the same thing regardless of who is observing them.
Yet I mentioned at least three types of computer observed
"Harry is not the same object as observed by both Bob and Tom because Harry is different from Bob is different from Tom"

This is nonsense, however. It conflates the differing opinions held by Bob and Tom of Harry with Harry being anything less than an object, the same object as seen by both Bob and Tom regardless of their differing opinions of Harry.

Now replace Harry with "the computer in my living room, that one specific object".

Or any specific object at all.

You can't say a rock, person, computer, any given thing is not an object, or somehow different from itself just because different objects are different.

Or I guess you can, but when you do you invoke nonsense.

Given that you step off from abject nonsense, an observably false starting point and one I cannot possibly believe you don't see, it makes me wonder what your game is here.
 
Free will is when our choosing is free of coercion and other forms of undue influence. Nothing more. Nothing less.

Still an assertion. A misapplied label.


In which case computer software, algorithms, etc, have free will and may be considered to be moral agents acting according to their own nature and makeup without coercion or force......I don't think so!

I don't think so either. Computers and robots are machines we create to help us do our will. They have no will of their own.

Machines that can make rational decisions based on sets of criteria. The criteria determining the action taken.... in principle just as a human brain does, without coercion or force, able to beat master chess players of its own accord, its makeup and function as an information processor....which is the evolved function of a brain.

Neither operate on the principle of free will.

Compatibilists assert free will where necessitation, not free will, determines outcomes.

You still don't get that our choosing necessitates our will which necessitates our action. Free will is the mechanism of necessitation for any deliberate act (except, of course, when the mechanism includes coercion or undue influence).

Except that it's not our will that necessitates actions. That is achieved by information input interacting with the state of the system, neural architecture and memory function. Evidence from neuroscience, etc, etc.....



It is not "either free will or necessitation". It is both, at the same time, in the same event!

Nope. Information processing is the key, then pattern recognition enabled by memory function and so on.....

''Neuroscientists have repeatedly pointed out that pattern recognition represents the key to understanding cognition in humans. Pattern recognition also forms the very basis by which we predict future events, i.e. we are literally forced to make assumptions concerning outcomes, and we do so by relying on sequences of events experienced in the past.''

Huettel et al. point out that their study identifies the role various regions of prefrontal cortex play in moment-to-moment processing of mental events in order to make predictions about future events. Thus implicit predictive models are formed which need to be continuously updated, the disruption of sequence would indicate that the PFC is engaged in a novelty response to pattern changes. As a third possible explanation, Ivry and Knight propose that activation of the prefrontal cortex may reflect the generation of hypotheses, since the formulation of an hypothesis is an essential feature of higher-level cognition.''




Causal necessity eliminates alternate decisions and free choice, which by definition assumes the ability to have chosen a different option whenever a set of alternatives is being presented.

Causal necessity cannot eliminate anything and still remain causal necessity. Remove one of those dominoes from the chain, and the "unfolding of events" simply ceases. Therefore, rule number one is: do not pretend that some events are not happening. All of the events are actually going to happen.

Alternate possibilities are not a feature, attribute or aspect of determinism. There are no possible alternate actions within a deterministic system.

There is no place in the given definition of determinism that allows alternate actions. Everything must necessarily proceed as determined, no deviations.

The decision to have dinner at the restaurant causes us to drive to the restaurant, sit at a table, browse the menu, consider our options, and then decide what we will order. The process of deciding what we will order causes us to think about the bacon and eggs we had for breakfast and the double cheeseburger we had for lunch. And we also think about our dietary goal to eat more vegetables. So, even though the steak looks delicious, we decide we will order the salad instead.

It doesn't start at the decision to have dinner at the restaurant; antecedent events bring you to that point and you proceed accordingly, every step fixed by the state before it and determining the next, unless we aren't talking about determinism?


That is how causal necessity works. It is about us deciding for ourselves what we will do and then doing it. Causal necessity is about us and what we are doing. Causal necessity itself never does anything. It does not change any of the facts as to what is actually happening.

We don't exist in a vacuum, we don't choose our state and condition, we are not separate from the world at large and we cannot operate, if determined, outside of what is determined.


The brain is in a state of fixed increments from moment to moment; therefore only one option is possible in that instance, followed by the next, then the next.

That works for me. I thought of the steak option, which caused me to think of the bacon and eggs for breakfast, and then the double cheeseburger for lunch. Since I wanted to balance my diet, I began considering the salad option, which caused good feelings due to my dietary goals. I did consider my options, one at a time, just as you suggested. But nothing has changed. It is still me deciding for myself what I will have for dinner through a reliable chain of mental events. You know, that causal necessity thing, that's actually about me causing things in a necessary way.

What is necessitated is not freely willed or freely chosen.


What Does Deterministic System Mean?
''A deterministic system is a system in which a given initial state or condition will always produce the same results. There is no randomness or variation in the ways that inputs get delivered as outputs.''



''Necessity is the idea that everything that has ever happened and ever will happen is necessary, and cannot be otherwise. Necessity is often opposed to chance and contingency. In a necessary world there is no chance. Everything that happens is necessitated.''
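The quoted definition of a deterministic system above can be pictured with a tiny sketch (purely illustrative, not taken from any source): the next state is a fixed function of the current state, so the same initial state always gives the same result.

# Illustrative sketch of the quoted definition: no randomness or variation,
# so a given initial state always produces the same result.

def step(state):
    return (state * 31 + 7) % 1000        # next state is a fixed function of the current state

def run(initial_state, n_steps=10):
    state = initial_state
    for _ in range(n_steps):
        state = step(state)
    return state

assert run(42) == run(42)                 # same initial state, same result, every time
print(run(42), run(43))                   # different initial states may produce different results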

If it's determined by the circumstances and the state of your brain that you choose chocolate over vanilla, vanilla was never a possibility or an option for you in that instance.

If vanilla was on the menu, then it was a real possibility. If chocolate was on the menu, then it was yet another real possibility. That's two real possibilities. The fact that I chose the chocolate does not mean that vanilla was at any time not a real possibility!

A lot of things are on the menu, yet according to the definition of determinism, only one option is realizable in any given instance in time; the determined option. What is determined is neither freely willed nor freely selected; it is fixed by antecedents as each step of the process unfolds from initial conditions at time t.


Again, I would suggest that you still do not understand the meaning of "possibility". I do not need to choose the vanilla for it to be possible to choose it. It only needs to be there, available for me to choose it if I want.

I understand the meaning of possibility and I apply that meaning to the given definition of determinism, which does not permit alternate actions, therefore other possibilities. Without alternate actions, there are no alternate possibilities.


Every option on the menu was realizable before we even opened the menu. It was realizable during our choosing and even after we made our choice. That's what realizable means, that it can be realized, even if it never is realized.

Realizable by different people, each according to their state and condition in the instance of selection: the only possible action in that moment in time. Other actions open as events progress and conditions change, not because that is freely willed or freely chosen but because each action/event leads to the next with no possible deviation.

That's why free will is incompatible with determinism.

Impossible.

If you don't understand "possible" then you probably don't understand "impossible".

I understand both in relation to determinism. Which I have explained.

Though some do appear to have trouble understanding the implications of determinism.

If any option is open at any given moment in time, you are not talking about determinism at all, but something else, some magical quantum world on a macro scale.

I'll repeat: ALL EVENTS ARE ALWAYS CAUSALLY NECESSARY. I presume this is just as true of the quantum world as it is everywhere else. However, objects behave differently according to how they are organized. Different levels of organization, whether quantum, physical, biological, or intelligent will involve different rules of behavior due to different causal mechanisms. (This is why we heat our breakfast in the microwave and drive our car to work, instead of the other way around).


Of course all events are always causally necessary, that is my point. It's what I have been arguing all along, including what I have described above.
 
misapplied label.
The misapplication of a label is to say that a label here is "misapplied". It begs the question of whether you are right.

Par for the course for you, though.
Computers and robots are machines we create to help us do our will. They have no will of their own.
Wrong. For the same reason that FDI is wrong.

Once they contain the will, they have it, and it is theirs.

Let's take a thought experiment from a movie: John Dies at the End. If you haven't seen it yet and I'm spoiling it for you, tough, it's been out for forever.

David and John, as the result of a number of ridiculous, implausible, and impossible adventures end up having a discussion with an abomination who tells the duo that all it would take to make one of them into a pedophile is a small change to a few neurons.

Let's imagine then that Korrok does this thing to John, and is the only one capable of reversing it and Korrok refuses to do so.

We now have a situation where Korrok is responsible for the fact that John is a pedophile.

That's all fucked up, but it's the way it is now.

But John the pedophile is now also responsible, as John the Pedophile, for every child he molests. We cannot just say "punish Korrok for John being a pedophile", because regardless of whether he is the victim of coercion, John is still a troublesome thing that we can't make any less a pedophile.

In short, regardless of the fact Korrok mind controlled John into being a pedophile, John still holds wills, those wills are freely willed by the person he now is, and he is responsible for them.

So in this way both are responsible, Korrok for making John a pedophile, and John for his continued pedophilia.

Similarly, it doesn't matter where the original program came from as far as assigning responsibility in the moment to the machine, because the machine still holds that will. You cannot change the machine's will by changing the programmer because once it holds the will, the machine holds it, and unlike John, does so freely on account of lacking the will to reject the wills others offer.

If that machine is just a drone that randomly goes around and kills people, while apprehending the maker of that drone prevents new drones that kill people from launching (owing to some psychopath having a will to create murder drones), the original murderbot will still be murdering people owing to the murderbot's will, freely held because it doesn't discriminate on wills it accepts work on.

This is because, while computers have their own wills, they can't usually be "coerced", on account of the fact that they are not normally configured to care about what they are asked/told to do. It is not coercion when you say to the guy to your left "hey, you should go kill that old lady over there" and he does. It's suggestion and their consent to do it, and their will they act upon because they accepted this task. Similarly, it is not coercion for the computer when you "whisper a task in its ear".

We might in fact say "holding the will to do anything someone asks you as a thing capable of killing people is not a will we would allow to be free". And hence we might make a law against the very manufacture and ownership of drones that can become murderbots and likewise against the freedom of things which murder folks on the mere suggestion to do so.

The person whispering is responsible for what they do, but the pedophile, machine, and lady killer are all responsible for the wills they hold at that point.

They will all respectively do what they are asked, but they, not the programmer, will be the ones doing it.
 
Free will is when our choosing is free of coercion and other forms of undue influence. Nothing more. Nothing less.

Still an assertion.

It is an assertion backed up by the three dictionaries I quoted. But don't worry, your definition is found there too, in second place.

A misapplied label.

The label is applied as defined. If we see what is commonly understood to be a cat, then we call it a "cat". If we see a dog we call it a "dog".
And when we see someone deciding for themselves what they will do, while free of coercion and undue influence, we call it "free will". It is all very straightforward.

Machines that can make rational decisions based on sets of criteria. The criteria determining the action taken.... in principle just as a human brain does, without coercion or force, able to beat master chess players of its own accord, its makeup and function as an information processor....which is the evolved function of a brain.

The machine plays chess because that's what we programmed it to do. The microwave oven cooks our food because that's what we built it to do. In all cases the machine is functioning to satisfy our will, because it has no will of its own.

When a machine starts acting as if it had a will of its own, we usually call someone to repair it.

Neither operate on the principle of free will.

Free will is not an operating principle. Free will simply describes the conditions of the choosing operation: Was it coerced or free of coercion? Was it unduly influenced or free of undue influence? This is really simple stuff and easy to understand.

Except that it's not our will that necessitates actions. That is achieved by information input interacting with the state of the system, neural architecture and memory function. Evidence from neuroscience, etc, etc.....

If I decide that I will eat an apple right now, then I will get an apple and eat it. The intention to eat the apple motivates and directs my actions until the apple is eaten. There is nothing in neuroscience that contradicts this.

Your conglomeration of neuroscience expressions do nothing to contradict this. The "information input" to "the state of my system" is that I feel a need to eat something. That triggers the recall by my "memory function" that I have apples that I can eat and that they are very satisfying. The intention to eat an apple then motivates and directs my subsequent actions as I go to the kitchen, pick out an apple, rinse it and dry it, and then begin eating it. All of these coordinated actions are carried out by my own "neural architecture".

My original expression is totally consistent with the neuroscience. Neuroscience provides a ton of additional details, as to how different parts of the brain contribute to the general function of realizing I am hungry and getting an apple to satisfy that need. But neuroscience does not change the facts as stated by my description. It simply provides additional details.

''Neuroscientists have repeatedly pointed out that pattern recognition represents the key to understanding cognition in humans. Pattern recognition also forms the very basis by which we predict future events, i.e. we are literally forced to make assumptions concerning outcomes, and we do so by relying on sequences of events experienced in the past.''

Huettel et al. point out that their study identifies the role various regions of prefrontal cortex play in moment-to-moment processing of mental events in order to make predictions about future events. Thus implicit predictive models are formed which need to be continuously updated, the disruption of sequence would indicate that the PFC is engaged in a novelty response to pattern changes. As a third possible explanation, Ivry and Knight propose that activation of the prefrontal cortex may reflect the generation of hypotheses, since the formulation of an hypothesis is an essential feature of higher-level cognition.''

Exactly. My prior experience with eating apples enabled me to predict that an apple would satisfy my current hunger. So, I set my intent upon eating an apple. That intent motivated and directed my steps to the kitchen to get an apple and then eat it.

Now, if I found that I had no apples in the kitchen, then my PFC would have to deal with the novel situation, and perhaps my PFC would find something else to snack on, or perhaps my PFC would decide to go to the grocery store to buy more apples. The PFC would have to decide what to do.

There's an article on the Prefrontal Cortex in Wikipedia which summarizes its function like this: "The basic activity of this brain region is considered to be orchestration of thoughts and actions in accordance with internal goals." The internal goal in this case would be to satisfy my current hunger. The thoughts would be of the apple and its satisfying properties. The actions would be me walking to the kitchen to get an apple to eat.

The facts of neuroscience have not changed anything. They simply provide a more detailed explanation of how things work, such as the specific brain areas involved in different parts of the operations that result in thoughts and actions and our awareness of internal goals.

By the way, your link to the Springer article is not working. You might want to fix that to avoid frustrating your readers. You may be able to find that article using a google search and then update your link to the working source.

Causal necessity cannot eliminate anything and still remain causal necessity. Remove one of those dominoes from the chain, and the "unfolding of events" simply ceases. Therefore, rule number one is: do not pretend that some events are not happening. All of the events are actually going to happen.

Alternate possibilities are not a feature, attribute or aspect of determinism. There are no possible alternate actions within a deterministic system. There is no place in the given definition of determinism that allows alternate actions. Everything must necessarily proceed as determined, no deviations.

So, determinism must remain silent as to alternate possibilities. Determinism may only speak to things that will certainly happen. It may not speak of things that can happen or that could have happened. As soon as determinism opens its mouth about things that do not concern it, it ceases to be determinism, and becomes something else.

Determinism is not allowed to assert that we could not do otherwise, but only that we would not do otherwise. Determinism is ignorant as to what can and cannot happen. These topics exist within the context of possibility, not within the context of necessity. When determinism attempts to speak of possibilities, it makes a silly ass of itself.

Possibilities exist solely within the imagination. They do not exist as actualities in the real world.

The only way that possibilities exist in the real world is as actual mental events produced reliably by physical processes within the brain. And this is where determinism returns to our picture. Each possibility, as a real mental event, will occur by causal necessity, just like every other event.

Thus, determinism does not exclude these physical events, but rather asserts that they will necessarily happen. We will experience them as a series of thoughts and feelings according to the logic of our language that has evolved over millions of years. And thus, they are totally unavoidable. Possibilities will show up within the logical mechanism that reasons and decides.

What Does Deterministic System Mean?
''A deterministic system is a system in which a given initial state or condition will always produce the same results. There is no randomness or variation in the ways that inputs get delivered as outputs.''

Exactly. And each possibility that we consider will appear in our mind exactly as it does, with no randomness or variation, without deviation.

I'm hoping you'll eventually catch on to what I'm saying here.

Necessity is the idea that everything that has ever happened and ever will happen is necessary, and cannot be otherwise. Necessity is often opposed to chance and contingency. In a necessary world there is no chance. Everything that happens is necessitated.''

The first flaw in that statement is the unauthorized use of "cannot" instead of "will not". The use of "cannot" immediately changes the context from necessity to possibility, creating a paradox.

The second flaw is to exclude chance and contingency, which are thoughts that must necessarily occur as part of the causal mechanism as described above. We have evolved these notions to deal rationally with our uncertainties as to what will happen. So, we must logically have notions of what "can" happen and what might happen, just in case our circumstances are not what we think they are.

A lot of things are on the menu, yet according to the definition of determinism, only one option is realizable in any given instance in time; the determined option.

Sorry, but determinism has no clue as to what is possible, or realizable, or any other -ible or -able. Determinism has no knowledge of what can and cannot happen. It only knows about what certainly will happen.

I understand the meaning of possibility ...

Okay. Please demonstrate that understanding by explaining the meaning of possibility.
 
In all cases the machine is functioning to satisfy our will,
Yes, it happens to be doing so. But if our will changes, it's not as if its will is going to change, even so, without some additional action upon the object.

It has a will, regardless of where that will came from; otherwise I might say "you have no will of your own, as in all cases the 'you' is functioning to satisfy 'causal necessity's' will," and so this:
because it has no will of its own
is contradictory to your own compatibilism.

So, are you a compatibilist or a hard determinist?

I have no problem acknowledging that the computer is a thing incapable of discernment over the will it holds; if you ask it in the right way, it will do anything that may be done by such as it. It is incapable of malice, but it is not incapable of holding a will which is free. It is also not incapable of holding free will: the will to decide for itself what it will do.

Microsoft office? No, it's not got the right regulatory controls built in.

A Dwarf in the context of a running dwarven "world"? Well, yes, those do, but it's not very rich in depth or mutability.

It doesn't have to be very rich in depth or mutability to be such as it is, however.
 
We have an interest in the outcomes. The program is serving our interests. Neither the program nor causal necessity has any interest in any outcomes. The Dwarf's will is a projection of your own. It's one of those many cases where we apply "theory of mind" inappropriately to inanimate objects or non-intelligent species. And that's why we often do yell at inanimate objects.
 
You mistake that a direct self-sourced interest in outcomes is necessary to hold a will or for that will to be freely held or to be free.

If you ask someone to kill someone else and they say yes, even if they always say yes, just because they can, it's still their will to kill someone too, freely held by them.

Interest in outcome is not necessary. Just an interest in "doing the thing" which may be the automatic interest: it's just what they do.

Otherwise we would have no justification to lock up the troublemaker who will do literally any thing anyone asks.
 
You mistake that a direct self-sourced interest in outcomes is necessary to hold a will or for that will to be freely held or to be free.

Morality evolves logically from life. The distinction between living and non-living is need. The need for air, water, and food animates the living organism to seek ways to satisfy that need. We call something "good" if it meets a real need that we have as an individual, as a society, or as a species. We call something "bad" if it prevents us from meeting a real need or if it harms us unnecessarily.

The programmed Dwarf has no needs to satisfy, that's why it has no interests in any consequences. You have needs. And you care about consequences. You have an interest in the consequences of your choices. Thus, you choose the option with the best consequences.

We could, in theory, build a robot with an interest in the consequences of its choices. That would give it criteria by which to make its choices.
 
No, morality evolves from general game theory.

As it is, the programmed dwarf does have needs. It needs to process something in a way that is called "eating", it needs to undergo some process called "sleeping", it needs to "do work" on occasion.

If it doesn't do these things, and there are situations which would prevent them from being attainable, he will starve, dehydrate, or even go insane.

I could change that, but the point is, I'm not going to, because I need him to be a thing, for the moment, with needs... If only to make my point here.

When I am putting on a dwarf mask, I have to satisfy those needs otherwise I die.

So does the dwarf.

I don't think you're realizing quite how much work went into making this dwarf happen to be what it is, or how much it is.

The whole reason it's deciding to fight is because it really wants to be with family, but his only family died in a tree felling accident some seasons ago, and so his FIGHT need is the only one he can actually satisfy regularly.

Even so, he's really depressed.

The whole situation is kinda fucked up.

There's a reason I don't actually play that game anymore.
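For what it's worth, the kind of needs-driven agent being described can be sketched in a few lines (the need names and numbers below are invented for illustration, not taken from the actual game): each need has a satisfaction level, and the agent acts on the most urgent need it can actually satisfy.

# Invented, minimal sketch of a needs-driven agent: it acts on the most
# urgent need that is actually satisfiable in its current situation.

needs = {"eat": 0.8, "sleep": 0.7, "be_with_family": 0.0, "fight": 0.2}   # 1.0 = fully satisfied
satisfiable = {"eat": True, "sleep": True, "be_with_family": False, "fight": True}

def choose_action(needs, satisfiable):
    candidates = [n for n in needs if satisfiable[n]]                     # needs it can act on
    return min(candidates, key=lambda n: needs[n]) if candidates else None

print(choose_action(needs, satisfiable))   # "fight": the most urgent need left open to it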
 
jarhyn said:
specific computers are the same regardless of who is watching the specific computer
"Harry that Tom sees is exactly the same Harry that Bob sees".

This is acknowledgment of the axiom of universal objectivity and naturalism in general: that material things observed are the same thing regardless of who is observing them.
Yet I mentioned at least three types of computer observed
"Harry is not the same object as observed by both Bob and Tom because Harry is different from Bob is different from Tom"

This is nonsense, however. It conflates the differing opinions held by Bob and Tom of Harry with Harry being anything less than an object, the same object as seen by both Bob and Tom regardless of their differing opinions of Harry.

Now replace Harry with "the computer in my living room, that one specific object".

Or any specific object at all.

You can't say a rock, person, computer, any given thing is not an object, or somehow different from itself just because different objects are different.

Or I guess you can, but when you do you invoke nonsense.

Given that you step off from abject nonsense, an observably false starting point and one I cannot possibly believe you don't see, it makes me wonder what your game is here.
<removed> You've gone from computers to Bobs. Your argument still depends on self reference for everything. Self is not at the center of things. Things are at the center of things.

<removed>  Scientific method

The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction (the inference of a general law from particular instances: Often contrasted with deduction), based on such observations; experimental and measurement-based testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]
 
@FDI, I am referencing an object and have been, as computers are objects.

It's cute that you want to stamp your feet and play stupid games where you claim computers are not objects, but the fact is, each individual computer, rock, every person in this world is an object, in addition to whatever images those objects may project or contain.

Every material thing in this world is an object.

The planet itself is a very large, complicated object.

So is the solar system.

So is the cat on my lap.

So is the computer in my office.

The latter of these things is an object which is observably in the business of containing a dwarf, also an object (formed of charge patterns among the larger object of which it is a part), with observable properties, one of those properties being some thing that is exactly the machinery that is set up to "open" a "door", and which is set up to check whether the door is open.

It will either succeed or fail at that. It will always find itself there.

And deterministically, he will fail.

And deterministically I can say, without any contradictions or nonsense, "his will to open the door was constrained (not free)"

Marvin, on reading this statement, would be able to understand "he had a series of instructions unto a requirement"; "One of the instructions is to open a door"; "when he executed on this instruction, the door did not open".

It encodes at least these three objective facts about the objects in the system.
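Those three facts can be read straight off a toy version of such a system (a sketch with invented names, not the actual program's internals):

# Toy sketch, invented names: the held will ("open the door") either meets
# its requirement or does not; if the door is locked, the will is constrained.

def open_door(world):
    if not world["door_locked"]:
        world["door_open"] = True              # the instruction succeeds
    return world["door_open"]                  # requirement: the door ends up open

world = {"door_locked": True, "door_open": False}
met = open_door(world)

print("fact 1: he held a series of instructions unto a requirement (open the door)")
print("fact 2: the instruction to open the door was executed")
print("fact 3: requirement met" if met else "fact 3: the door did not open; the will was constrained (not free)")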
 
Free will is when our choosing is free of coercion and other forms of undue influence. Nothing more. Nothing less.

Still an assertion.

It is an assertion backed up by the three dictionaries I quoted. But don't worry, your definition is found there too, in second place.

Dictionaries merely express word usage. Common reference doesn't prove the proposition. God is in the dictionary, Satan and the angels and demons are in the dictionary.

A lot of stuff that doesn't exist is in the dictionary. If a dictionary could be used to resolve the free will debate, it would have ended long ago.

A misapplied label.

The label is applied as defined. If we see what is commonly understood to be a cat, then we call it a "cat". If we see a dog we call it a "dog".
And when we see someone deciding for themselves what they will do, while free of coercion and undue influence, we call it "free will". It is all very straightforward.

Some things are labelled miracles, the work of the Lord. People slap false labels onto lots of things, Mohammad is the prophet of God, Jesus is the Saviour......free will is compatible with determinism, a system where will has no agency...

Machines that can make rational decisions based on sets of criteria. The criteria determining the action taken.... in principle just as a human brain does, without coercion or force, able to beat master chess players of its own accord, its makeup and function as an information processor....which is the evolved function of a brain.

The machine plays chess because that's what we programmed it to do. The microwave oven cooks our food because that's what we built it to do. In all cases the machine is functioning to satisfy our will, because it has no will of its own.

The brain plays chess because it has the capacity (neural architecture) and has acquired the necessary information. It matters not how the input is acquired, it's the inherent state of the system that determines ability.

We don't choose our brain, its abilities or its features. Some people suck at chess.

When a machine starts acting as if it had a will of its own, we usually call someone to repair it.

When the brain breaks down we call someone, a doctor, to treat the condition.



Neither operate on the principle of free will.

Free will is not an operating principle. Free will simply describes the conditions of the choosing operation: Was it coerced or free of coercion? Was it unduly influenced or free of undue influence? This is really simple stuff and easy to understand.

If will has no agency, cannot regulate brain activity or make a difference to outcomes or behaviour, it is not 'free will' regardless of how many times it's asserted.



Except that it's not our will that necessitates actions. That is achieved by information input interacting with the state of the system, neural architecture and memory function. Evidence from neuroscience, etc, etc.....

If I decide that I will eat an apple right now, then I will get an apple and eat it. The intention to eat the apple motivates and directs my actions until the apple is eaten. There is nothing in neuroscience that contradicts this.

Your brain decides before you, the conscious entity, a construct of the brain, are aware of the decision; the action is brought to mind milliseconds after initiation.

It is information processing, not free will.

Your conglomeration of neuroscience expressions do nothing to contradict this. The "information input" to "the state of my system" is that I feel a need to eat something. That triggers the recall by my "memory function" that I have apples that I can eat and that they are very satisfying. The intention to eat an apple then motivates and directs my subsequent actions as I go to the kitchen, pick out an apple, rinse it and dry it, and then begin eating it. All of these coordinated actions are carried out by my own "neural architecture".

The intention to eat the apple was not freely willed. It's the tail end of a long process that began before the decision and action took place.

We are talking about determinism, where all actions are necessitated, not freely willed.

My original expression is totally consistent with the neuroscience. Neuroscience provides a ton of additional details, as to how different parts of the brain contribute to the general function of realizing I am hungry and getting an apple to satisfy that need. But neuroscience does not change the facts as stated by my description. It simply provides additional details.

There is no original expression; each and every 'expression/action' is a consequence of its prior state, which morphs into the current state, which morphs into the future state of the system.

That's determinism.

No escape clause.


''Neuroscientists have repeatedly pointed out that pattern recognition represents the key to understanding cognition in humans. Pattern recognition also forms the very basis by which we predict future events, i.e. we are literally forced to make assumptions concerning outcomes, and we do so by relying on sequences of events experienced in the past.''

Huettel et al. point out that their study identifies the role various regions of prefrontal cortex play in moment-to-moment processing of mental events in order to make predictions about future events. Thus implicit predictive models are formed which need to be continuously updated, the disruption of sequence would indicate that the PFC is engaged in a novelty response to pattern changes. As a third possible explanation, Ivry and Knight propose that activation of the prefrontal cortex may reflect the generation of hypotheses, since the formulation of an hypothesis is an essential feature of higher-level cognition.''

Exactly. My prior experience with eating apples enabled me to predict that an apple would satisfy my current hunger. So, I set my intent upon eating an apple. That intent motivated and directed my steps to the kitchen to get an apple and then eat it.

The information that is your prior experience evolves into your current action, which evolves into future actions. At no point do you freely choose an action. One action evolves into the next within an intricate weave of causality.


Now, if I found that I had no apples in the kitchen, then my PFC would have to deal with the novel situation, and perhaps my PFC would find something else to snack on, or perhaps my PFC would decide to go to the grocery store to buy more apples. The PFC would have to decide what to do.


Whatever you do is a matter of the interaction of information: no apples in the kitchen and the desire to eat an apple compels you to go to the store; you must have an apple, the desire is strong. You drive to the store only to discover there are no apples. What then?

The brain computes this information, according to its evolutionary role, and you realize that you can satisfy your craving with apple juice, a compromise.

The brain acts in relation to its circumstances, inputs in concert with memory.


There's an article on the Prefrontal Cortex in Wikipedia which summarizes its function like this: "The basic activity of this brain region is considered to be orchestration of thoughts and actions in accordance with internal goals." The internal goal in this case would be to satisfy my current hunger. The thoughts would be of the apple and its satisfying properties. The actions would be me walking to the kitchen to get an apple to eat.

The facts of neuroscience have not changed anything. They simply provide a more detailed explanation of how things work, such as the specific brain areas involved in different parts of the operations that result in thoughts and actions and our awareness of internal goals.

By the way, your link to the Springer article is not working. You might want to fix that to avoid frustrating your readers. You may be able to find that article using a google search and then update your link to the working source.

Causal necessity cannot eliminate anything and still remain causal necessity. Remove one of those dominoes from the chain, and the "unfolding of events" simply ceases. Therefore, rule number one is: do not pretend that some events are not happening. All of the events are actually going to happen.

Put simply, causal necessity does not equate to freedom of will. The very opposite in fact. What is necessitated by elements beyond the control of will is not freely willed.
Alternate possibilities are not a feature, attribute or aspect of determinism. There are no possible alternate actions within a deterministic system. There is no place in the given definition of determinism that allows alternate actions. Everything must necessarily proceed as determined, no deviations.

So, determinism must remain silent as to alternate possibilities. Determinism may only speak to things that will certainly happen. It may not speak of things that can happen or that could have happened. As soon as determinism opens its mouth about things that do not concern it, it ceases to be determinism, and becomes something else.

Determinism is not allowed to assert that we could not do otherwise, but only that we would not do otherwise. Determinism is ignorant as to what can and cannot happen. These topics exist within the context of possibility, not within the context of necessity. When determinism attempts to speak of possibilities, it makes a silly ass of itself.

Possibilities exist solely within the imagination. They do not exist as actualities in the real world.

The only way that possibilities exist in the real world is as actual mental events produced reliably by physical processes within the brain. And this is where determinism returns to our picture. Each possibility, as a real mental event, will occur by causal necessity, just like every other event.

Thus, determinism does not exclude these physical events, but rather asserts that they will necessarily happen. We will experience them as a series of thoughts and feelings according to the logic of our language that has evolved over millions of years. And thus, they are totally unavoidable. Possibilities will show up within the logical mechanism that reasons and decides.

Mental events are physical events. Electrochemical events. What we imagine or think is a physical activity that is equally subject to the deterministic activity of the physical world as everything else.

What we imagine is a mental rearrangement of recognized patterns: if it wasn't for this, that could have happened.

Unfortunately, there is no possibility of ''if it wasn't for this'', and 'that' could never have happened.

A common daydream being, 'if only I could go back in time knowing what I know now' - knowledge that would change the system and produce different outcomes, more informed decisions, avoiding pitfalls and errors....a nice little fantasy.
 
We don't choose our brain, its abilities or its features. Some people suck at chess.
But we do choose aspects of our brain, some of its abilities, some of its features. Sometimes we study chess and get better at chess.

We get good at what we practice, and there are many things that folks practice. Of those things (and also of the things of our imaginations), many of us selected a subset, via a choice function operated by our brain, of our brain, so our brain choosing, and so us choosing, which of those many would be the specific ones we practiced.

And there are many things besides practice which operate thus, as a function of the brain's choices, and so our choices, which determine pieces of how we think, our abilities, and objective features of our brain.

Sometimes the feature is "this neuron right here activates .05 second longer than it used to". It doesn't have to be more than that to be a real feature decided upon by us. Even so, we know it more by the impact it has on the phenomenon that is "how we think" directly, rather than through the observation of a modification of timing biases. And that's OK, because one implies the other; our experience couldn't change if our neurons didn't change in some way!
 
@FDI, I am referencing an object and have been, as computers are objects.

It's cute that you want to stamp your feet and play stupid games where you claim computers are not objects, but the fact is, each individual computer, rock, every person in this world is an object, in addition to whatever images those objects may project or contain.

Every material thing in this world is an object.

The planet itself is a very large, complicated object.

So is the solar system.

So is the cat on my lap.

So is the computer in my office.

The latter of these things is an object which is observably in the business of containing a dwarf, also an object (formed of charge patterns among the larger object of which it is a part), with observable properties, one of those properties being some thing that is exactly the machinery that is set up to "open" a "door", and which is set up to check whether the door is open.

It will either succeed or fail at that. It will always find itself there.

And deterministically, he will fail.

And deterministically I can say, without any contradictions or nonsense, "his will to open the door was constrained (not free)"

Marvin, on reading this statement, would be able to understand "he had a series of instructions unto a requirement"; "One of the instructions is to open a door"; "when he executed on this instruction, the door did not open".

It encodes at least these three objective facts about the objects in the system.
All that's missing from you on computers is a single reference which doesn't relate computers to you or your senses. When you manage to disassociate yourself from computers in your specifications of them as objects, you might get my attention. Relating computers to yourself and calling the result objective or object is just plain silly. If you are included in the specification then the result is subjective or subject.

Please read the bolded stuff in the scientific method bit I posted before you embarrass yourself any further.

Until then. Yawn.
 
We don't choose our brain, its abilities or its features. Some people suck at chess.


Sometimes the feature is "this neuron right here activates .05 second longer than it used to". It doesn't have to be more than that to be a real feature decided upon by us. Even so, we know it more by the impact it has on the phenomenon that is "how we think" directly, rather than through the observation of a modification of timing biases. And that's OK, because one implies the other; our experience couldn't change if our neurons didn't change in some way!
First, neurons always activate when they are stimulated; that's part of their design. So they don't actually activate, they process. Neurons aren't like switches; we just model them in some cases as acting so.

We know neurons are active in several ways.

We detect when they change rates of uptake and disposal of ATP products, using oxygen signaling technology such as MRI, and by variations in cell surface and interior electrochemical behavior, using external and internal electrical correlate activity.

There are at least three metabolic (biochemical) processes ongoing in neurons which we selectively use in the study of their behavior.

Neural signaling is regulated by between-channel and within-channel electrochemical biasing changes in relative +/- or transmitter substance activity.

Etc.

As for whatever you think you are talking about, it probably isn't that way at all. That 'change in some way' wave is just that: it's 'I dunno, so I can say anything I want to say.'

BS in, BS out. I know it's 'I dunno' because I've studied or gone through the archetype, conditioning, drives, self, needs, and desires eras of psychobabble.
 