• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

What is free will?

First you won't watch videos, then you aren't talking about measurement as she and I put it. Now you're complaining that we aren't responding to your setup, which isn't an answer to anything she or I wrote.

Hey man, I'm very clear that if you can't provide text, I'm not going to bother.

Measurement in these two situations is not even the same kind of ascertainment.

In one situation, we're looking at REAL INSTANTANEOUS FIELD VALUES.

In the other we are modifying field values and making inference.

The reason you get different results is because you are not measuring the same thing.

It is not the method that changes things per se, but the thing you are trying to measure with each technique. That's why you get different results when you're, say, the god of the universe looking at the bitfield, as opposed to a bald primate POKING the bitfield.

You are measuring the poke, not the bitfield.

I am talking about measuring the bitfield itself.
Again. Not responsive. I provided text. What we measure determines what is measured. Whether physics depends on small things isn't what we measure. It's what small things we measure. Do we measure situational observations of interference, or do we measure what interference causes? To wit, from the article: "A 'measure' is what’s necessary to assign weights to the elements of a set. For all practical purposes it’s the same as a probability distribution." In your words, the observation of a partial interference pattern is a clown act which, explained as coherent, is fictional chatter.
I don't think you really understand the difference that I'm talking about.

As I said, the thing you wish to measure has an immediate value. You cannot measure the speed and position of a particle simultaneously. But "a god" can. In fact I do it all the time in my stupid little games.

Essentially one is operating on after-the-fact causal inference and the other is operating on immediate state in such a way that nothing in the system is changed. Of course this requires the ability to stop time and look at field values without causing any effect on them.

This is the stage we imagine ourselves on in a mode of superdeterminism: able to see the dice rolls without causing one to tumble out of the roller.
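The god-view versus poke distinction can be made concrete with a toy sketch (all names here are hypothetical illustrations, not anything from the posts above): an omniscient read returns the instantaneous field value without touching it, while an interactive measurement perturbs the very value it reports.

```python
import random

def god_read(field, i):
    """Omniscient read: report the instantaneous value, field untouched."""
    return field[i]

def poke_measure(field, i):
    """Interactive measurement: reading perturbs the value it reports."""
    observed = field[i]
    field[i] += random.choice([-1, 1])  # the 'poke' disturbs the field
    return observed

# A toy "bitfield universe" as a plain list of field values.
field = [3, 1, 4, 1, 5]
snapshot = list(field)

god_read(field, 2)        # returns 4; field is exactly as before
assert field == snapshot
poke_measure(field, 2)    # returns 4, but the field is now changed
assert field != snapshot  # the poke left a mark on what was measured
```

The point of the sketch is only that the two operations ascertain different things: one reads state, the other reads state while rewriting it.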

Two actors cannot believe different things from the same evidence and both be right. So one of us must be wrong about whether measuring in different ways measures different things.

My money is that measurements that yield different numbers measure different things.
Your last statement is true. That is our point. Measuring something that leads to a partial answer is not a good measurement. Obviously we aren't interested in knowing what we already know, beyond confirming that what we know is accurate. The partial measurement isn't a good experiment. That becomes obvious when we change perspectives and get the rest of the answer in the second experiment.

The reason you can measure more than one thing at a time is that you 'know' the state of each of the other things, because you've defined them in your model, aka 'experiment'. That is not an experiment. That is a verification report on something known.
 
Aaand... you devolved to word salad at the end there.

So, I spent my weekend reproducing some of the most complicated work done by Riemann, Euler, and Heaviside alone, in my fucking spare time.

Honestly, if you think I'm just "some armchair dweeb who probably doesn't know what he's talking about as relates the ideas 'determinism' 'systems theory' and 'graph theory'" you may have to reconsider that.

I acknowledge that I can be wrong.

It's why I don't go around claiming wildly often that I for sure know the answer to life the universe and everything.

It's why I read at least some of the words you write until you go off the rails with the conflation between libertarian free will and °°° •••.

I'm not discussing something that leads to a partial answer. I'm talking about absolute answers.

The thing is, I don't think you can put yourself in the perspective to actually understand what superdeterminism means.

I had a whole post about how all stochastic systems are actually representable in "just so determinism". That is Sabine's "superdeterminism".
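A minimal sketch of that "just so determinism" claim, under the assumption that the hidden randomness is a pre-fixed tape (a seeded PRNG standing in for it here; the function name is mine): the walk looks stochastic from the inside, yet every run of the universe plays out identically.

```python
import random

def stochastic_walk(steps):
    """A walk that looks random from inside: each step flips an unseen coin."""
    rng = random.Random(42)  # the hidden, pre-written 'tape' of outcomes
    pos = 0
    for _ in range(steps):
        pos += 1 if rng.random() < 0.5 else -1
    return pos

# Re-running yields the identical trajectory: the apparent randomness
# was 'just so' determinism all along, fixed before the first step.
assert stochastic_walk(1000) == stochastic_walk(1000)
```

Nothing inside the walk can distinguish this from genuine chance, which is roughly the sense in which any stochastic system admits a superdeterministic retelling.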

It poses zero threat to the systems theoretic consideration of free will. My dwarfy universe is superdeterministic! I've explained it several times and my guess is you usually just go glass-eyed and slack jawed as your eyes smoothly pass over what is seemingly unassailable, complicated text full of heavily abstract concepts like °°° and ••• and doing so in a way that entirely steps past your red herring of NeUrOsCiEnCE.

Still, the dwarves observably have °°° and •••, which, as has been pointed out, are concretely observable, non-arbitrary properties, indelible eternal features of that universe, and so clearly available in superdeterministic systems, and understandable as concepts (and useful ones) for the things in deterministic universes to have and to hold an understanding of, respectively: °°° and •••.

It is my contention that these concepts, °°° and •••, are what most people are thinking about, if sloppily, when discussing "free" and "will".

If you ask me very nicely I can again post the essay on why this works.
 
Lovely. So post it again.
 
Free will is the absence of compulsion to do as one pleases.
Free will is when a person decides for themselves what they will do, while free of coercion and other forms of undue influence.
The second element, which you had not yet mentioned, was choosing. You mentioned the first element, "wanting to do something". There is a distinction between a "want" and a "will". We may have multiple wants, but from among these wants we choose what we will do about them.

The will is a specific intent for the immediate (I will have an apple now) or distant (last will and testament) future. The chosen intent motivates and directs our subsequent actions until the intent is satisfied.
 
One aspect of compulsion worth mentioning is that it can be thought of as a force that can be overcome. If I don't want to do something but decide to do it anyway because of an outside compelling force, the fact that I could possibly overcome that force is not a good reason not to consider it a compelling force. In such a circumstance, an action is not an action of one's own free will, despite being one's decided choice.
A compulsion that one can overcome is not usually considered a compulsion. When assessing a person's responsibility for their actions, a compulsion that one cannot reasonably be expected to overcome excuses the person's choosing from blame; instead the compulsion is blamed, and would be treated medically and psychiatrically.
 
Free will is the absence of compulsion to do as one pleases.
Are moral considerations a compulsion?
Is knowledge a compulsion?
Is habit a compulsion?

What does "pleases" really mean?

Is free will really to act without considering consequences?
Excellent points, but our morality and our knowledge are integral parts of us. Free will does not require that we ever be "free from ourselves". It pleases the moral person that their moral considerations influence their choices. It pleases the knowledgeable person that their knowledge influences their choices.

Choosing for ourselves what we will do always involves considering the consequences of our options.
 
I think that you first need to establish what you think it means to "make a choice".

Choosing may be described as an operation that (1) inputs two or more options, (2) applies some appropriate criteria for comparative evaluation, and based on that evaluation, (3) outputs a single choice, usually in the form of "I will X", where X is the thing we will do.

Machines can be programmed with a list of goals and a set of priorities. When goals conflict, they can calculate the likely outcomes of choices and which outcomes best satisfy priorities. They can also be programmed to adjust future priorities on the basis of trial and error. That is because AI programmers deliberately design choice-making programs to mimic human thought processes.
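The three-step choosing operation described above can be sketched as a small program (the goals, weights, and names are hypothetical, purely for illustration): options in, weighted criteria applied, a single "I will X" out.

```python
def choose(options, criteria):
    """(1) input two or more options, (2) score each against weighted
    criteria, (3) output a single choice in the form 'I will X'."""
    def score(option):
        return sum(weight * rate(option) for rate, weight in criteria)
    best = max(options, key=score)
    return f"I will {best}"

# Hypothetical goals and priorities, standing in for a programmed agent.
criteria = [
    (lambda o: o == "eat the apple", 2.0),  # hunger carries the most weight
    (lambda o: o == "keep working", 1.0),
]

choose(["eat the apple", "keep working"], criteria)
# → "I will eat the apple"
```

Adjusting the weights after observing outcomes would be the trial-and-error priority update mentioned above; the structure of the operation itself does not change.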

Machines are tools that we create to help us accomplish our own will. They have no will of their own. They literally "have no skin in the game". If a machine begins acting as if it had a will of its own, we'd call a repairman.

I've been trying to make sense of your argument here, but I can't quite seem to grasp the logic. How does language have anything at all to do with making a choice? You seem to be saying that machines cannot deal with uncertainty, but that is exactly what robots have to deal with. I've seen robots navigate their way through obstacle courses that they've never encountered before. Sometimes they have to go over obstacles, sometimes around them, and sometimes under them. They make choices under unpredictable circumstances.

Machines have no interest in the outcomes. Only biological organisms have an interest in the outcomes, because they can suffer, and they can die. So, evolution has provided organisms with biological drives that enhance the likelihood that they will survive and reproduce. Take hunger, for example. There were no doubt variations in our earlier species that lacked a sense of hunger and quickly went extinct.

As you know, I am a linguist, and I have a great deal of experience with word meanings. I still don't understand how language is relevant to your argument that machines cannot have free will.

Machines lack a will of their own. Lacking a will makes free will non-applicable to machines.

The point is that the free will debate started with trying to justify the righteousness of an omniscient deity assigning blame to human actions. If God knows everything his creations will do in an absolute sense, then how can he hold them accountable for actions that he enabled by the act of creation? Accountability is an essential underlying component of the meaning of "free will".

No no no. Free will is a secular issue that is tied to assessing responsibility. If you did something bad because you chose to, then you are subject to correction. If you did something bad because someone forced you against your will, then you are innocent and the person who coerced you is held responsible. If you did something bad because of a significant mental illness, such as one that subjected you to hallucinations and delusions, or subjected you to an irresistible impulse, or that simply impaired your ability to reason, then you are innocent and the mental illness is held responsible and is subject to correction by medical or psychiatric treatment.

So, the notion of free will existed prior to the point where it was adopted by theists to give their omnipotent and omniscient God a "get-out-of-jail free card". If God restrained himself from interfering in your choices then it was claimed that you were responsible for your actions. The problem is that if God is given omnipotence and omniscience, then he also becomes omni-responsible.
 

Marvin, I don't understand why you are going back to an exchange I had with Fast in 2018, nearly four years ago, and responding point by point to my responses to him as if my comments were addressed to something that you said. You did not include the text that I was responding to, but to understand what I was saying, one needs to take that context into account.
 

So, to be fair here, maybe he's trying to engage folks in his topic of interest who have had past interest in the hopes of getting people in here who aren't DBT and FDI and Steve (and who aren't me, necessarily, seeing as I have a way with being complicated.)

I don't really blame him all that much, myself?

It's a fun topic (at least for me?) and discussing it has forced me to think of neural (and transistor) structures clearly in ways I had previously only vaguely expected might be possible.

The other day, the discussion on attention and how we "pay attention" in "the other free will thread, specifically on Compatibilism" made me really think, on a mechanical level, about how neurons can be arranged to accomplish those functions.

That and a fun HTM NAND primitive.
 


At this point, I feel that I would only be repeating things I have said in the past. I have already pretty much endorsed Marvin's side of the argument, and our minor differences don't seem worth making a big deal over.

Philosophers debate the question of whether there could be such a creature as a philosophical zombie, i.e. a being that is physically identical to a human being but lacks free will, qualia, or sentience. Scott Adams, the creator of the Dilbert comic strip, calls them "moist robots".

My view is that all robots, moist or not, are capable in principle of having what we call free will, qualia, and sentience. The material out of which one builds such robots does not matter. There is nothing magical about flesh and blood such that machines equipped with sensors and actuators could not be given the same properties. It's just that we are a very long way from building such machines and may never actually get there. Brains are essentially analog computing devices that operate without software. Everything is hardwired.
 

I am aware that you were responding to Fast. I thought I had some better responses than he did, and so threw in my 2 cents worth. Generally I found Fast to have some key insights into the problems. But there were a few areas where I thought my own insights might be helpful as well.
 


If you're interested in the problem of attention, Michael Graziano's "Consciousness and the Social Brain" would be helpful. He describes conscious awareness as a data set that tracks the brain's attention function. It's a cool theory from the standpoint of both neuroscience and computer science.
 

I'm personally interested more in the structural and mechanical description of how a system "pays attention" to a whole "buffer" in parallel.

It essentially involves nerves recurring back towards the layer just past the buffer layer, which then potentiate as a whole selection layer. A number of different buffers are all able to present to the selection layer, sometimes at the same time, sometimes at different times, or for different periods of time.

The result is sometimes you switch in "buffer A", and sometimes you switch in "buffer B" onto the same underlying receiver for these "activation images". By adding markers which indicate and switch how the underlying connected system handles the image, you can get the neurological equivalent of framed data on a common bus.
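The mechanism described above, with all names invented for illustration, can be sketched roughly: several "buffers" compete to drive one shared "receiver", and a marker travels with the winning activation image so the downstream system knows how to handle it, like framed data on a common bus.

```python
# Hedged sketch of buffer selection onto a common "bus" (all names invented).
from dataclasses import dataclass

@dataclass
class Frame:
    marker: str          # tells the receiving system how to handle the image
    image: list[float]   # the activation pattern itself

def select_and_present(buffers: dict[str, list[float]],
                       potentiation: dict[str, float]) -> Frame:
    """Gate the most strongly potentiated buffer onto the shared receiver."""
    winner = max(potentiation, key=potentiation.get)
    return Frame(marker=winner, image=buffers[winner])

buffers = {"A": [0.9, 0.1, 0.4], "B": [0.2, 0.8, 0.7]}

# When buffer A is more potentiated, its image appears on the bus...
assert select_and_present(buffers, {"A": 0.7, "B": 0.3}).marker == "A"
# ...and when B is, the same receiver sees B's image instead.
assert select_and_present(buffers, {"A": 0.2, "B": 0.6}).marker == "B"
```

The marker is what makes the shared receiver workable: the same wires carry different "frames", and the tag switches how the downstream system interprets them.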
 

At this point, I feel that I would only be repeating things I have said in the past. I have already pretty much endorsed Marvin's side of the argument, and our minor differences don't seem worth making a big deal over.

Philosophers debate the question of whether there could be such a creature as a philosophical zombie, i.e. a being that is physically identical to a human being but lacks free will, qualia, or sentience. Scott Adams, the creator of the Dilbert comic strip, calls them "moist robots". My view is that all robots, moist or not, are capable in principle of having what we call free will, qualia, and sentience. The material out of which one builds such robots does not matter. There is nothing magical about flesh and blood such that machines equipped with sensors and actuators could not be given the same properties. It's just that we are a very long way from building such machines and may never actually get there. Brains are essentially analog computing devices that operate without software. Everything is hardwired.
I do think there are such things as philosophical zombies, or at least things adjacent to this concept.

I would hazard to think that some people operate, more or less, as a particular kind of machine: a grand lookup table.

Lookup tables are known for two things: one, they are fast; two, they are dumb and very structurally expensive.

The memory, measured in the number of switches needed to represent the table itself plus the machinery to consult it, is much larger, and so costs a much more expensive resource to support.

This creates people whose behavior is incomprehensible to them because there are more just-so rules and fewer general rules.

It is speed at the cost of intelligence, flexibility, and creativity; particularly creativity, since creativity derives from having general rules that can be logically permuted and reassembled in useful ways*.

The lookup-tabler will fail at every new hurdle, while the generalizer might well see it coming and never once trip, even though it is new. As I've gotten older, I've come to see that people exist on a continuum between these two paradigms.

*Plus the madness and chaos of the mind that originates such ways!
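The contrast above can be made concrete with a toy example (the task and rules are invented purely for illustration): a lookup table answers only the cases it has memorized, while a compact general rule handles novel inputs it has never seen.

```python
# Toy contrast: memorized cases vs. one general rule (all values invented).

lookup = {(2, 3): 5, (4, 4): 8}       # fast but structurally expensive:
                                      # one entry per case it can handle

def generalizer(a: int, b: int) -> int:
    return a + b                      # one compact rule, reusable everywhere

def lookup_tabler(a: int, b: int):
    return lookup.get((a, b))         # returns None at any "new hurdle"

assert lookup_tabler(2, 3) == 5       # memorized case: fast and right
assert lookup_tabler(7, 9) is None    # novel case: trips
assert generalizer(7, 9) == 16        # never saw it, handles it anyway
```

The lookup table grows with every case it must cover, while the general rule stays the same size no matter how many inputs it faces, which is the structural-expense point made above.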
 
From the compatibilism thread it seems like there is an inferred mind-body duality.

Mind not based in biology, but an independent entity capable of free will and free choice.

Unconditioned by experiences wired into our brain as conscious and subconscious memory.
 
No no. There is no duality. When my brain is making a decision, I am making a decision. It is just one thing, not two.

The evolved brain is capable of imagination, evaluation, and choosing. The brain accomplishes these operations using many different functional areas. Conscious awareness is just one of those functional areas.

Free will is quite simple. It is when a person decides for themselves what they will do, like when they choose to order a dinner from the restaurant menu. We assume this is a deterministic process. We also assume that it is our own brain that is performing this process.
 

The compatibilist position is explicitly ANTI-duality.
 
Then the debate should be how what we are exposed to and experience from the start affects how we make a decision.

The argument is: free will exists, no it does not. Yes it does, no it doesn't...

In the end it comes down to the brain.
 
Do you recall the last time you were in a restaurant? Did you choose your meal of your own free will, or did someone else choose it for you?
 
I think it comes down to a desire to feel deep, even if they haven't managed to dent the problem with a fingernail.
 