• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.


The wavelength of light is the information upon which the brain bases its representation of colour. This is not an arbitrary representation. It is a representation that is tested on a daily basis....mistaking the colours of traffic signals at a busy intersection can get you into a lot of trouble.


I had a TERRIBLE dream about 4 years ago. It was one of those dreams where I knew I was dreaming. Some guy tied me to a stake and said that he was going to start torturing me. DBT, if I have any integrity from your point of view, I am telling you the honest truth: I told the guy that I was dreaming and that I would not feel the pain he said he was going to inflict on me. But he began torturing me anyway, and I remember feeling such pain and thinking that this didn't make sense, yet it hurt really badly, the way conscious pain does. I woke up and just couldn't believe it as I reflected on what had just happened.

The point is that inputs leading to experiences are not unique to one type.

Nobody has claimed that inputs are all identical. Inputs come in a huge variety of types and shades. Dreams are a symbolic rearrangement of past events, memory projected into current fears and desires....sometimes understood by the conscious mind, sometimes not. More often forgotten even as they occur.

Okay, there is a miscommunication of my whole post. What I meant is that the brain does not necessarily require a particular input to give a particular output of experience. For example, I may experience red (see red) without the input of the usual wavelength. I might hit my head, dream it, have an illusion of it, etc.

We experience a mentally constructed virtual world based on sensory inputs of objective information, wavelength, airborne molecules, etc, which also entails memory. Memory being woven into the 'fabric' of experience/consciousness enabling recognition, comprehension, etc.

Dreams, on the other hand, are constructed purely on the basis of memory - as I said earlier - therefore dream imagery/sensation of people, events, colours, etc, are memory arrangements of these things rather than both sensory and memory integration while the brain is awake.

Which is why the laws of physics do not apply to dreams. In dreams any impossible thing may be experienced.....you can fly through the sky, be chased by monsters that don't exist, and so on.

And, looking back at the posts, I am not saying that you think the brain's input-to-experience mapping is one-to-one. I can't remember why I was bringing this up in the first place. I don't think we are actually in any kind of disagreement, for once.

No problem.
 
But I cannot feel physical pain from a memory the way I felt physical pain in that dream. I am totally shocked to this day by that dream.
 

The sensation of pain varies even while in a conscious state, distractions, nerve damage, signal strength, etc, so it's not surprising the sensation of pain within various dream states is different. Sometimes actual pain, nerve signals from a wound or whatever, are integrated into a dream, weaving a story around the sensation of pain...which may not feel the same as in a conscious state even though the source of pain is an actual wound.
 
No. You don't act on your intent. That would be an infinite recursion.
Your brain acts. That act is you and your intentions.

But aren't we our brains? Isn't that where our 'software' exists? With everything else being semi-autonomous life support?
 
I don't hold the view that we are our brains. I am me. I am not my brain. My brain doesn't drive to the store. I drive to the store. I'm taking her to dinner. I'm not taking her brain to dinner. Doesn't it even sound rather weird to say such silly things as we are our brains?
 
We are a body of information stored in a brain, activated when needed (put to sleep when not) as an interface, an avatar, with the world, to respond to its objects and events; a bundle of memories, a complex set of instructions that define who we are and how we feel, perceive and respond to an event/prompt/urge/desire/aversion at any given moment......all of which unravels in the presence of memory function breakdown, until nothing is left when memory function no longer works.
 
Human society and thinking is based on the unspoken but believed idea that humans have full agency and can make decisions freely.

Yes. And our spoken/written language is saturated with this idea, which is probably an illusion.

Ditto for our sense of self (itself arguably an illusion) which I think lies near the core of the free will illusion. After all, free will gives the supposed self something to do, supposedly. Who wants a redundant self that can't 'do' anything?
 

This idea that agency is an "illusion" poses a dilemma of sorts. It implies that there is some form of agency that would be real and not illusory, does it not? This is the problem that I have with eliminativist arguments. It is one thing to reduce a phenomenon or concept to its constituent parts. It is quite another to claim that the whole somehow does not exist, because it can be fully explained in terms of its components.
 

Hm. There is some form of agency that is real. Your computer has it.

ps what are you doing here? Don't you know I came here to get away from people like you? :)
 

I'm stalking you, of course. I can't let you get away with whatever you're trying to get away with. :)

Your response is problematic, of course, because computer agency, to the extent that we try to model it, is based on animal agency. That is, we analyze the kinds of actions that humans go through in making decisions, and we try to mimic that in computers. After all, brains are just control and guidance machines, just a lot more sophisticated than computer systems. So why would you think that human agency is an illusion, but computer agency is not? If anything, the reverse is true, because computers perform worse on so many tasks that humans are good at, e.g. object recognition.
 

I don't think human agency is an illusion. I think free will is an illusion. I was responding to "full agency and can make decisions freely."

Of course, it depends on what is meant by that, but I took it to be the sort(s) of free will that we mostly believe we have. With the caveat that this may vary from person to person (and even culture to culture) but as far as can be told is a version of Libertarian free will or very closely related to it. It's also the version that I persistently experience, despite intellectually taking it to be an illusion.

There are other so-called versions of free will, such as Dennett's, and imo these seem much closer to describing our capacities. Understanding of these versions circulates among philosophers and those interested in philosophy. Personally, I would not, like him, use the term free will for them, because it can easily be confused with the folk psychology illusion version (just as I wouldn't call nature 'god' for similar reasons), but at the end of the day that's just a labelling issue. I think Dennett describes 'what it is we have' very well.
 
The kind of free will people really have is very easy to spot, as it is exemplified straightforwardly in what humans do and contrasted with what things like other animals and machines do (not just "can do" but actually do).

It's just a very obvious fact that no computer possesses anywhere near the same amount of free will as humans do.

I can only guess that the sterile debate that's been going on about free will for eons is just the result of ideological obduracy on all sides.

Of course, there's nothing I can do about that. People are free to prefer ideological obduracy to a wee bit of rationality.
EB
 

I agree.

I think when certain people (e.g. Sam Harris and Jerry Coyne) say that we don't have free will, they are probably correct, but they are only talking about the sort of illusory free will that seems to be commonly believed in. Magic or ultimate free will, if you like. Also, I think they are probably correct when they outline the issues this lack throws up for morality and justice.

On the other hand, I think they over-simplify. Whilst we probably do not have free will in that sense, we do have certain sophisticated capacities for agency (using the term 'we' generally, because the capacities will vary from person to person, as well as between species, as well as between biological organisms and non-biological machines).

As such, I'm in favour of debunking popular conceptions of free will, but not of throwing the baby out with the bathwater.

Most recently I came across this article:
https://www.sheffield.ac.uk/polopoly_fs/1.101516!/file/smilansky-on-shallowness.pdf

In which the writer argues that the most reasonable and accurate position is to admit that both Compatibilism and Hard Determinism are partly right:

"The hard determinist is right to say that any punishment is in some sense unjust, but wrong when she denies that some punishments are more unjust than others because of the issue of compatibilist control........it is unconvincing to deny the difference between the control possessed by the common thief and that of the kleptomaniac....

Similarly, once we grant the compatibilist that his distinctions have some foundation and are partially morally required, there is no further reason to go the whole way with him, to claim that the absence of libertarian free will is of no great moral significance, and to deny the fact that without libertarian free will even a vicious and compatibilistically-free criminal who is being punished is in some important sense a victim of his circumstances."

As you imply, being only partly right is not usually the way we clever apes like to see our positions, and yet they almost always are just that. Things are almost always a question of degree, which is why I prefer to think in terms of 'degrees of freedom' when it comes to agency and intentions.
 
It's just a very obvious fact that no computer possesses anywhere near the same amount of free will as humans do.
Since very few computers possess anywhere near the same behaviour as humans, that may be true. But I fail to see the significance. There is nothing free about it.
 
I don't want to derail this thread to debate on free will here.

So, instead, if you're interested, please visit https://talkfreethought.org/showthr...ual-free-will-humans-have&p=478795#post478795

EB

It seems to me that the thread has shifted away from the OP into areas like consciousness and free will, which are popular topics across a great many threads. Ruby's point that there are various concepts of "free will" at issue is correct, of course, but I think that the main problem with these debates is that they often become implicit struggles over which concept we are talking about. He knows that I am a compatibilist, but he doesn't really want to go with that concept of what free will really means. In my case, I see it as basically doing what one wants, other things being equal, although one doesn't really get to choose what one wants. The process of making choices and decisions is a fully determined process, but the "freedom" part is more about freedom to bring about a predetermined outcome.

I was going to say something about agency and "do" here, but I think that the conversation has gone elsewhere.
 
...the "freedom" part is more about freedom to bring about a predetermined outcome.

Ah. If you'd made clear you were an Incoherentist, I would have understood where you were coming from. :)

I was going to say something about agency and "do" here, but I think that the conversation has gone elsewhere.

Someone else has started a thread on free will, so you could make your point. I agree we don't (necessarily) need to get into free will here, although I think the agency considerations make the issue of free will hard to avoid by implication. But I agree we don't have to converge exclusively on it, especially now there's another thread dedicated to it.
 
The interesting thing about agency is that we are "hardwired" to make a basic distinction between animate agency, which involves intention or will, and inanimate agency, which does not. The distinction is so deeply ingrained in us that every language in the world builds it into its grammatical structure in ways that only linguists are consciously aware of. That is, simple clauses in every language must make the following type of distinction.

Suppose that Billy strikes a ball with a bat, and the ball subsequently hits and breaks a window. You can describe the event as:

Billy broke the window with a baseball.
Billy used a bat to break the window with a baseball.

But not:

The bat broke the window with a baseball.

That is, nonvolitional causes cannot command instruments grammatically. This is a universal trait of all languages.
 
That is more or less what I meant when I opined that a lot of this stuff saturates our language (even when it just comes to agency or supposed agency, setting aside free will).

But you explain it so much better than me. :)

What about, though....when we might say the weather did something, such as in 'the wind blew me over'?

So...we might not in certain circumstances attribute agency to something which doesn't have it (e.g. a bat), but...sometimes we do. In other words, although I agree we are 'hardwired' to make the distinction, we often misapply it, most often in the direction of misattributing agency rather than misattributing the lack of it.
 

Good question, because humans have attributed volition to wind gods forever. However, one cannot say "The wind broke the window with a rock" except in a metaphorical or occult sense, where one attributes volition to the wind. So only volitional agents can play the semantic/grammatical role of "Agent". This is something that the great Indian linguist Panini (or his predecessors) discovered centuries ago, and it is also a fact that informs modern linguistic theories of grammar.
 