
The Great Contradiction

ruby sparks said:
It might be possible to say that there is no contradiction between saying 'there is full determinism' and what you are calling free will, but that would only be because what you are calling free will is not in fact free will at all.
First, you have not produced any good reason to even suspect that. In fact, I'm talking about writing of my own accord/free will, in the ordinary sense of the expressions.

Second, you claimed in some posts that there is a contradiction, though despite repeated requests you did not present a valid argument deriving one. In other posts, you claimed it was a matter of empirical evidence. So, even if I were completely wrong about determinism, the meaning of the words, or whatever, it would remain the case that you are being inconsistent.

ruby sparks said:
You are only counting the obvious determinants and omitting the literally vast number of non-obvious ones. It is in some ways analogous to someone saying that an iceberg is a big lump of ice that sticks up above the surface of the sea. So using that definition there then would be no contradiction between saying, 'there are icebergs' and, 'all of an iceberg sticks up above the surface of the sea'. But that would be incorrect because that is not what an iceberg actually is.
First, again, you have not produced a valid argument that starts with the premises that the universe is deterministic + I write this of my own accord (plus whatever other premise you would want to add) and has a contradiction as the conclusion.

Second, even if you were correct about that (you aren't, but even if), your position would still be inconsistent. The reason is that just as you keep claiming that there is a contradiction between determinism and free will, you then go on to talk about...neuroscience. You just keep doing that. Unless you're suggesting that neuroscience provides evidence for causal determinism (of course it does not, but if that's what you're saying, please say so), that is inconsistent on your part.

In other words, either it is a contradiction, or it is an empirical matter. If you believe that it is a contradiction, I would ask you to produce a valid argument that starts with the premises that the universe is deterministic + I write this of my own accord (plus whatever other premise you would want to add) and has a contradiction as the conclusion. If you believe it is a matter of empirical evidence, I would ask you to please say so, and say clearly that there is no contradiction.

ruby sparks said:
Regarding what people mean when they say they have free will, I can't stop them from saying that any more than I can stop someone from using the word iceberg to describe what is actually only the tip of the iceberg.
You could of course provide a valid argument.
 
Therein lies the rub. You think everything you do is absolutely of your own accord. That view ignores the deep subconscious conditioning and programming we are all subject to from birth.

Word association. If you grew up in the USA and I said 'car company', you would likely say Ford or Chevy. Say 'pickup truck' and it would be Ford.

If I said cola you'd probably say Coke or Pepsi.

I do not think free will exists. Choices are always conditioned.

You go to buy a new car. There are a number of choices. You can make a free choice in that it is uncoerced. However, your choice is biased by advertising and cultural norms.
 

I don't ignore those things. I reckon they have nothing to do with whether I act of my own accord. I choose to say: Dongfeng. That is something I chose to say of my own free will, and it does not depend on where I grew up (I just looked it up on the internet; I had never heard of it before). The first name I thought of was 'Tesla', very probably because I had just read an article about Tesla. The company I first think of might depend on things like that. But that's not the point. The first thing that pops into my head is not something I choose of my own accord. What I say here - or rather write - is.

When you said 'cola', I actually did not think of any brands before I read that you said Coke or Pepsi. But that is beside the point. I chose of my own free will to say Ubuntu Cola - a name I found by googling randomly. I had never heard of it before. Word association does not prevent me from saying things of my own free will.
 
I've been through this with him before. I don't find his argument (that rejecting retributive punishment necessarily means one rejects the notion of not hurting the innocent because they don't deserve it) at all persuasive and I have no desire to rehash it.
Please yourself, of course; but you misstated my argument. I didn't argue that rejecting retributive punishment necessarily means one rejects the notion of not hurting the innocent because they don't deserve it*. Quite the reverse -- I specifically asked you whether you also reject the notion of not hurting the innocent because they don't deserve it, and, in the event that you don't reject it, invited you to explain why you accept that notion.

It is of course very common for people to reject retributive punishment but not reject the notion of not hurting the innocent because they don't deserve it. As far as I can tell, the usual psychological reason for this is that when there's a conflict between a person's intellectual moral theory and his intuitive moral emotions, a typical person will simply believe both are correct and make practical judgments emotionally. He will use his theory for explaining his judgments when it supports them; but when it implies they're wrong, he will either simply not think about his theory's implications in that case, or else he'll come up with some far-fetched rationalization for believing his theory actually supports them. This is a general pattern in human moral ethology; it occurs in the case of retributive punishment because it occurs pretty much everywhere.

(* I said "To reject the retributive notion of moral desert is simultaneously to reject the notion of not hurting the innocent because they don't deserve it." Your attempted paraphrase, "rejecting retributive punishment", is not synonymous with "rejecting the retributive notion of moral desert".)
 
At any rate, the point is that if you are correct about there not being a universal human moral sense, either there is no fact of the matter as to whether that discrimination is unethical, or there is but you do not know it, or it is not unethical as nothing is.

Ethics doesn't come into the frame of whether there is a universal moral sense. If there is a moral sense, discrimination doesn't exist. It is irrelevant to consider whether it would even be hypothetically unethical. There is not a universal moral sense, so moral discrimination and the ethics of discrimination are considered within each social unit's ethical code.

What I sense in your arguments is a lack of scope being applied to each moral judgement. If common conditions exist within tribes, I'd expect some similarity in many moral statements. For instance, if a functioning culture is multilingual and multi-colored, I'd expect movement toward some agreement on racial and lingual constraints, and I'd expect to see movement toward such status as one or the other language becomes more likely to be spoken or persons of color are seen within the culture.

I'd expect a racist judgement if one or another group within the culture took action to limit growth or commonality of emerging racial and ethnic trends.

This is the sort of thing humans have been confronting over the past thousands of years, since commerce and integration among tribes began taking place. On the other hand, genetic changes happen less rapidly. So from this perspective I expect difference-making decisions would be the norm. We are seeing both play out. It doesn't seem right that moral, cultural guides should be bound up with or compared with logical intellectual constructions.

Yes there is some overlap, but that overlap isn't the driving force behind evolution. Our intellect has become so potent that a moral misstep can bring an end to mankind. For that reason I'd concentrate on reducing the power and number of constraints people put on others. I'd reduce the power of personal judgement to produce reactions. Some would say put more estrogen and sugar in society.

Overall I think we are doing pretty well by cooling commerce and hatred with joint responsibility and cooperation. Fanning distrust really doesn't make us better.

FDI
 
Please yourself, of course; but you misstated my argument.

(* I said "To reject the retributive notion of moral desert is simultaneously to reject the notion of not hurting the innocent because they don't deserve it." Your attempted paraphrase, "rejecting retributive punishment", is not synonymous with "rejecting the retributive notion of moral desert".)
Yes, that was quite a blunder. Apologies for the misrepresentation.
 
fromderinside said:
Ethics doesn't come into the frame of whether there is a universal moral sense. If there is a moral sense, discrimination doesn't exist. It is irrelevant to consider whether it would even be hypothetically unethical. There is not a universal moral sense, so moral discrimination and the ethics of discrimination are considered within each social unit's ethical code.
That does not follow. Why would discrimination not exist if there were a universal human moral sense? Because it's morally wrong?
That does not follow. Let us specify some hypotheses:

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Of course, here the hypotheses are to be understood in the context of the usual talk of universal human traits, which does not mean all humans have them. For example, humans have two legs. But that does not mean that all humans have two legs. It's a species-wide trait, but it can fail to be present due to illness, accident, and the like.

Now, P1 holds. That is an empirical observation. Other apes and monkeys have similar traits. Obviously, the existence of P1 does not imply a lack of discrimination.
P3 holds as well. The issue might be P2. I would say it does hold, perhaps with some caveats in the form of some tolerance - there is some vagueness, I think, in nearly all or all of the terms used to talk about the world around us, including moral ones and those involving proper function.

But none of these things precludes the existence of immoral behavior, of course. Moral assessments might be false and lead to unethical behavior. Or they might be correct, but moral motivation might not be as strong as some other motivation in the case of a particular human at a particular time. And so on.

Moreover, if P2 does not hold, then that would not preclude discrimination, either, nor does it seem to make it less probable.
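As a toy illustration of how P2 can be read - the moral sense as a function, so that the same inputs always yield the same verdict - here is a minimal sketch; the judging rule and input names are invented for illustration and are not part of the hypotheses:

```python
def moral_sense(inputs):
    """P1: a system mapping inputs (some information) to a moral assessment.
    The rule below is a stand-in, invented for illustration."""
    harms = inputs.get("harms_someone", False)
    intended = inputs.get("intended", False)
    return "wrong" if (harms and intended) else "permissible"

# P2, read as determinism of the sense: two properly functioning senses
# (modeled here as two calls to the same function) agree on the same inputs.
situation = {"harms_someone": True, "intended": True}
assert moral_sense(situation) == moral_sense(dict(situation))
assert moral_sense(situation) == "wrong"
```

If P2 fails, the picture changes to different humans computing different functions, which is where the relativism question below comes in.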


fromderinside said:
Yes there is some overlap, but that overlap isn't the driving force behind evolution. Out intellect has become so potent that a moral misstep can bring an end to mankind. For that reason I'd concentrate on reducing the power and number of constraints people put on others. I'd reduce the power of personal judgement to produce reactions. Some would say put more estrogen and sugar in society.
Regardless of what you'd do, here, as in other posts, you accept that there is such a thing as a moral mistake. But if P2 does not hold and there is such a thing as a moral mistake, it is relative to the moral sense of an individual human, or that of a group of humans.
 
ruby sparks said:
So can I first ask you what sorts of things you would consider universal human traits, perhaps especially traits that you feel can be set beside morality for comparison?
Having two arms, two legs, 10 fingers, opposable thumbs, two eyes with certain properties, one liver with certain properties, and so on. That includes a large number of psychological dispositions. Universal traits are something you see in every species. For example, if you take a look at a documentary on, say, African lions, you will get information about some properties of African lions. Those properties - including mental properties, as behavior is of course described - are shared by all lions, or rather, by all African lions save for illness/malfunction/defect, etc. There are universal African lion traits. And universal dog traits, and so on. The same applies to humans.

ruby sparks said:
For example, if we say that having language is a universal human trait, that seems true (I'm going to assume exceptions are allowed in such matters) but there are lots of different languages.

It is a good candidate, though I don't know how innate it is; that's a matter for further research. But one thing is clear: the capability for language is a universal human trait. Sure, there are different languages, though with some things in common, which make them suitable for humans.

A usual analogy in philosophy is human color vision (despite some slight differences, it is at least nearly universal).

ruby sparks said:
Also, what would be your definition of "a universal (sense of) morality"?
Definitions are always difficult, but I would say that in this context, there would be a universal human moral sense if P1 and P2 hold. But I'm not sure it's convenient to include P2 in the definition for the general purpose of studying the matter, as it would rule out any system that results in some form of mild relativism, and one might want to consider those, depending on context.

ruby sparks said:
I should clarify that at this time I have no strong views on whether there is a universal morality or not, or even a universal sense of morality. I think it would be fair to say that I tend to think not, but I'm fairly sure I have not thought about it as deeply or looked into it as fully as you. I have discussed the subject before, quite a while ago, but not extensively, and I don't think I remember everything about those discussions.
I'm not the most qualified person in this thread to discuss the matter, either. But in any case, that humans do have a system that makes moral judgments is an empirical observation, and there are similar systems in other apes and monkeys (P1). If P2 did not obtain, then weird things would happen, like mistranslations of moral terms between two different languages, or even between two communities using apparently the same language, or other weird things. Since those are extraordinary claims, the burden would be on the claimants (long ago, I used to think there was a good case from (apparent) disagreement to miscommunication, but now I think the case is weak).
 
Therein lies the rub. You think everything you do is absolutely of your own accord. That view ignores the deep subconscious conditioning and programming we are all subject to from birth.

And that is only one of a number of sets of determinants it ignores.

Although just to note, a machine that has been programmed a certain way and switched on is arguably, thereafter, doing things 'of its own accord', so the term 'of its own accord' may be said to be correct. As such, the phrase 'of its own accord' seems not precise enough. The machine can be agential; it can learn and make choices. To say that it is freely choosing to do anything of its own accord is arguably another matter.
 
If P2 did not obtain, then weird things would happen, like mistranslations of moral terms between two different languages, or even between two communities using apparently the same language, or other weird things. Since those are extraordinary claims, the burden would be on the claimants (long ago, I used to think there was a good case from (apparent) disagreement to miscommunication, but now I think the case is weak).

I'm not sure what you're saying there.



To go back:


P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.


As you say, P1 & P3 seem fine.

I think I'd be happy to say that P2 seems fine also, as a hypothetical (possibly with some small caveats about what 'proper functioning' means) but I'm still confused as to what that implies. It seems to imply that a particular morality is essentially subjective, not universal. All that would be universal (with allowable exceptions) is that humans have some subjective sense of morality, which varies from human to human.

It seems to me that this is because no two humans are entirely the same.

In other words, it seems to me that P2 should be amended to:

P2: If two identical humans have properly functioning moral senses, they will yield the same outputs given the same inputs.

Then the reason that there is no particular universal morality (beyond the universality that there is some morality) would be that P2 does not actually pertain in the world. Or, if it did pertain (I should perhaps not entirely rule it out), then that would only be in the exceptional case of those two humans, not humans in general. In general, we could say that it is universally true (again with allowable exceptions) that morality varies in humans.
 
Second, I explained that those studies are in fact not relevant, because the problem is that incompatibilists have a mistaken theory about the meaning of the words. This isn't about neuroscience. It's about the meaning of expressions such as 'of one's own free will'.

As I have already said, if you want to effectively call the tip of an iceberg an iceberg, I can't stop you. It neatly avoids there being a contradiction. And yes, neuroscience would have no bearing on the matter, except in the sense of possibly being able to help explain why you would want to do that. :)
 
When you said 'cola', I actually did not think of any brands before I read that you said Coke or Pepsi. But that is beside the point. I chose of my own free will to say Ubuntu Cola - a name I found by googling randomly. I had never heard of it before. Word association does not prevent me from saying things of my own free will.

I don't think you are exploring the matter thoroughly and taking all the determinants into account during the sequence of events.

For example, and just for starters, why did you even try to think of anything at all when he said 'cola'? Why did you even just do that at all? Temporarily set aside, for the moment, that you believe that at some point after that, you did something of your own free will. Just focus on what happened when he said (or to be precise you read) the word 'cola'.



Tangentially (and you personally may not be interested in this given that you eschew neuroscience here, but others may be), it's possible, I believe, that you may have registered the word 'cola' non-consciously before it entered your consciousness. To me, one of the fascinating things about what neuroscience can bring to this issue (bearing in mind that neuroscience is possibly still in its infancy) is the ability to scientifically measure things happening at very short timescales that we can't otherwise appreciate or discern.

For example, it takes, say, a millisecond for a single electrochemical impulse to get across from one part of your brain to another. Experiments suggest that you can non-consciously register (and respond to) an image of a face, for example, in a time about 50 times longer than that (50 milliseconds), because, it seems, of trillions of impulses crisscrossing your brain during that longer time. But apparently it takes 10 times that long (500 milliseconds - the trillions of impulses have been travelling back and forth for 500 times the length of time each journey takes) for the recognition to become consciously experienced. It appears that your 'conscious now' is actually, always and inevitably, what happened in the past, albeit a very short time ago. This makes sense, because it involves processing, and that takes time.
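The ratios in those timescales can be laid out explicitly; the numbers below are the post's own illustrative figures, not measured neuroscience data:

```python
# Illustrative timescales from the paragraph above (not precise data):
impulse_ms = 1         # one impulse crossing between brain regions
nonconscious_ms = 50   # non-conscious registration of a face
conscious_ms = 500     # conscious experience of the recognition

# ~50 crossings' worth of time before non-conscious registration:
assert nonconscious_ms == 50 * impulse_ms
# ~10 times longer again (500 crossings' worth) before conscious experience:
assert conscious_ms == 10 * nonconscious_ms
```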
 
Oh what the heck, I'll try some logic. It might be fun. It's not my thing and I don't think it can sort this issue out.

P1. An X is a type of Y
P2. Something is fully X
C. Therefore that something is fully Y

For example:

Blue is a type of colour
An object is fully blue
Therefore the object is fully coloured

A sheep is a type of mammal
An entity is fully a sheep
Therefore the entity is fully a mammal

A prior determination is a type of constraint regarding making free will choices
An entity is fully prior determined
Therefore that entity is fully constrained regarding making free will choices
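Setting aside the 'fully' qualifier (which this sketch does not model), the bare form of the inference - X is a type of Y, something is an X, therefore it is a Y - can be checked as set inclusion. The sets below are illustrative assumptions, not from the post:

```python
# Read "X is a type of Y" as X being a subset of Y.
mammals = {"sheep", "dog", "human"}  # illustrative members of Y
sheep = {"sheep"}                    # X: "sheep" is a type of mammal

def is_type_of(x_set, y_set):
    """P1: X is a type of Y, read as X ⊆ Y."""
    return x_set <= y_set

def conclusion_follows(thing, x_set, y_set):
    """(P1 and P2) -> C: if X ⊆ Y and thing is an X, then thing is a Y."""
    if is_type_of(x_set, y_set) and thing in x_set:  # P1 and P2
        return thing in y_set                        # C
    return True  # premises not met; the inference claims nothing

# The conclusion follows whenever the premises hold, for any candidate:
assert all(conclusion_follows(t, sheep, mammals) for t in ["sheep", "dog", "rock"])
```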





As I said, I think that formal logic is going to be of limited use here, just as it would be for, say, deciding whether we really evolved from apes. But it seems we have a fan of formal logic rather than science here.
 
That does not follow. Why would discrimination not exist if there were a universal human moral sense? Because it's morally wrong?
That does not follow. Let us specify some hypotheses:

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Of course, here the hypotheses are to be understood in the context of the usual talk of universal human traits, which does not mean all humans have them. For example, humans have two legs. But that does not mean that all humans have two legs. It's a species-wide trait, but it can fail to be present due to illness, accident, and the like.

Now, P1 holds. That is an empirical observation. Other apes and monkeys have similar traits. Obviously, the existence of P1 does not imply a lack of discrimination.
P3 holds as well. The issue might be P2. I would say it does hold, perhaps with some caveats in the form of some tolerance - there is some vagueness, I think, in nearly all or all of the terms used to talk about the world around us, including moral ones and those involving proper function.

But none of these things precludes the existence of immoral behavior, of course. Moral assessments might be false and lead to unethical behavior. Or they might be correct, but moral motivation might not be as strong as some other motivation in the case of a particular human at a particular time. And so on.

Moreover, if P2 does not hold, then that would not preclude discrimination, either, nor does it seem to make it less probable.



Regardless of what you'd do, here, as in other posts, you accept that there is such a thing as a moral mistake. But if P2 does not hold and there is such a thing as a moral mistake, it is relative to the moral sense of an individual human, or that of a group of humans.

I thought I made it clear that morality and ethics come from two distinct systems. One is moral on some scale. If morality is a sense, there are tugs and pushes on behavior. It comes with being a human. Ethics is a human-created system of rules. One applies ethics on some scale consciously. I'm pretty sure being and doing are different things. You'd need one argument P for morals and another argument Q for ethics.

Since we are talking about a sense, one sees that seeing is not doing; yet one senses some pull (if there is a moral sense) while one considers the group's rules when acting as a member of a social group. It's a bit like oil and water: they don't mix without work.

Constructing a joint scale using the two systems takes a bit of work. One example is the theory of conjoint measurement.


Yeah that's the ticket
 
The reasons for punishment: rehabilitation, isolation, deterrence, and vengeance. That last one, vengeance, isn't a legitimate function of government.

If you kill your rapist because you are angry, that may be acceptable, but government has no business hurting people based on irrational emotions.
...
We have an advocate of retribution in this thread. It makes no sense to me.

Suppose Sara accidentally kills Joe's daughter in a car wreck. And now suppose Joe wants to kill Sara's daughter retributively, because it's "fitting," because it "fits the crime," because it's "just."

... He's acting only in the belief that retribution is somehow good.

Does Joe have any rational motive at all? I'd say no. Joe isn't hoping to accomplish any good thing. No deterrence, no isolation, no rehabilitation, nor any other benefit. He just thinks symmetry is fitting and proper. He thinks retribution is good for nothing.

That is, he thinks retribution is good in spite of the fact that it has no benefits.

Joe wants to do a great harm, but he has no offsetting benefit as a justifying goal.

I'm not on Joe's side. I think he's irrational. I do not favor retribution.
Dude, that argument is fractally wrong -- wrong at every level you look at it.

1. To defeat an idea you have to refute the best case for it, not the worst case for it.

Granted.



You've presented the worst imaginable scenario for a retributive punishment. It was an accident. Retribution is for deliberate wrongdoing. And he wants to kill the perpetrator's daughter, not the perpetrator. Retribution is for perpetrators. Nobody you're arguing with is in favor of taking out our retributive urges on the accident prone or on proxies. What do you take us for, Christians, who think we all deserve Hell for Adam having eaten an apple? Sara's daughter didn't kill Joe's daughter; therefore Joe killing Sara's daughter is a first-strike. It's not a counterattack. It's not symmetry. I.e., it's not retribution.

2. Surely you knew that -- when have you ever heard anyone advocate retribution against innocent people? So why did you construct that example?

I was trying to unmix the motives. I made it an accident so that deterrence and rehabilitation wouldn't factor in.



Presumably, because you want your readers to think retribution is retribution, there are no relevant distinctions to be made within that category,

I don't understand the category at all. I don't know about these "relevant distinctions" of which you speak. This is the first I've heard of them.



and killing Sara's daughter for Sara accidentally killing Joe's daughter is morally on a level with killing Sara for Sara deliberately killing Joe's daughter. But if those scenarios really were no different, then what's your motivation for the switch?

I tried to construct a pure-retribution scenario. What's your motive for the accusative tone of your post?



How did you know killing the innocent would pack more emotional punch than killing the guilty, unless you share the emotion? This implies you must understand at least on a subconscious level that bogus so-called "retribution" against the innocent really isn't the same thing as actual retribution against the guilty. So there appears to be a self-contradiction baked into your argument.

I'm not sure we're going to get along.



3. You're trying to arouse an emotional reaction against retribution, and that's fine -- all moral arguments are emotional arguments -- but you're doing it by making Joe some sort of primitive Bible-writing bronze-age goat-herding bigot who thinks children are property.

You keep bringing up religion. Did I bring up religion?

You attribute this children-are-property stuff to me when you're the one bringing it up.

Retribution makes no sense to me. It seems pure villainy. But maybe you can explain. Maybe you can do a better job of separating out other motives.




Joe evidently thinks killing Sara's daughter for Sara killing his daughter is the same thing as smashing Sara's car if Sara smashed his car. That's a revolting characteristic of Joe that makes him unsympathetic, but it has nothing to do with his belief in retribution.

So let's change it to cars rather than daughters. Or let's make Sara's daughter already "deserving" of death from a separate incident.



If you want to arouse emotions against retribution, arouse emotions against retribution, not against irrelevancies. There are legitimate forms of emotional argument, but guilt-by-association isn't one of them.

I'm not interested in arousing emotions against it. I'm trying to tease out justifications for it. How is it to be distinguished from arbitrary cruelty?

Or, in the alternative, I'm trying to show that there are no justifications for it.
 
Retribution makes no sense to me. It seems pure villainy.

Assuming we all mean the same thing when we use the word retribution (which is probably not a safe assumption) then.......

Using prison as an example of applying retribution. (deprivation of certain freedoms essentially, nothing more).....

I think retribution in that form can still make sense in terms of (a) deterrence and (b) protection (of others).

As for rehabilitation, I'm not seeing this as a punishment, beyond 'teaching the prisoner a lesson', which it seems to me is deterrence.

Which part do you feel is villainous? The emotional urge? The gaining of a sense of satisfaction that the wrongdoer is punished? I'm not sure I'd call that villainous, necessarily.

It is, I think, linked to a belief in free will, so in that sense if there is actually no free will then we might call that component of it unreasonable, or irrational or illogical perhaps. Possibly. I'm not totally sure if any of those words is fully appropriate. What I mean is, we might not reasonably or rationally feel those things if we believed or realised that there was no free will. Our reaction would be similar to our reaction if a computer malfunctioned.

That said, if the computer went on fire and burnt down a building, and several people died in the fire, we might then look to see if we could blame the human manufacturers.

I have heard it said that philosophers and others who claim there is free will, of some sort or other, are essentially still looking for someone to blame. It struck me that there might be some truth in that. A world where nothing and nobody is to blame for anything and where everything that goes wrong is more or less an unfortunate, inescapable accident or inevitability does not appeal, perhaps.

The sense of loss might be worse than there merely being no god and therefore no objective purpose or meaning to life.
 
Retribution makes no sense to me. It seems pure villainy.

Assuming we all mean the same thing when we use the word retribution (which is probably not a safe assumption)

Right, which is part of why we're having this conversation.




then.......

Using prison as an example of applying retribution. (deprivation of certain freedoms essentially, nothing more).....

I think retribution in that form can still make sense in terms of (a) deterrence and (b) protection (of others).

Angra's point--assuming I understand--is that retribution is good even when it does harm without benefit.

So let's set deterrence and protection aside (as I tried to do in my Joe and Sara example) and ask if there's still reason to want to hurt people.




As for rehabilitation, I'm not seeing this as a punishment, beyond 'teaching the prisoner a lesson', which it seems to me is deterrence.

We could make the same not-a-punishment argument about the other motivations too. For myself, I'm willing to set those arguments aside and continue to use the word "punishment."




Which part do you feel is villainous? The emotional urge? The gaining of a sense of satisfaction that the wrongdoer is punished? I'm not sure I'd call that villainous, necessarily.

The righteous insistence that people should be hurt for no reason, for no benefit. That seems to me villainous.




It is, I think, linked to a belief in free will, so in that sense if there is actually no free will then we might call that component of it unreasonable, or irrational or illogical perhaps. Possibly. I'm not totally sure if any of those words is fully appropriate.

I never see anyone persuaded of anything in free will discussions. So I shy away from them as unrewarding tedium.

I'm a free willie myself, but I don't care whether others agree.

And I don't see how belief in free will changes anything having to do with punishment.




What I mean is, we might not reasonably or rationally feel those things if we believed or realised that there was no free will. Our reaction would be similar to our reaction if a computer malfunctioned.

Exactly!

If we could punish a computer to help it learn not to malfunction in the future, then maybe we should.

If we could punish a computer to dissuade other computers from malfunctioning, maybe we should.

If we could punish a computer by isolating it so it couldn't spread a virus to other computers, maybe we should.

But, should we punish a computer when there is no benefit at all?

That would be gratuitous cruelty.

But Angra approves of inflicting harm for no benefit in those cases where the harm is -- in some unexplained sense -- "fitting" or "deserved."

This feels to me like esthetic cruelty, as in, "I want to hurt people just for the beauty of inflicting pain," but Angra objects when I use the term "poetic justice." So there may be something here that I just don't understand.

ANGRA: I'm not trying to misrepresent you. I am trying to articulate my understanding in order to give you opportunity to clarify.





That said, if the computer went on fire and burnt down a building, and several people died in the fire, we might then look to see if we could blame the human manufacturers.

In order to accomplish what?

Rehabilitation, deterrence, and isolation come to mind for me.

Retribution, inflicting pointless harm because it's pretty, does not appeal.




I have heard it said that philosophers and others who claim there is free will, of some sort or other, are essentially still looking for someone to blame. It struck me that there might be some truth in that. A world where nothing and nobody is to blame for anything and where everything that goes wrong is more or less an unfortunate, inescapable accident or inevitability does not appeal, perhaps.

It's a point of view.




The sense of loss might be worse than there merely being no god and therefore no objective purpose or meaning to life.

I don't see how gods could have to do with my purpose or meaning, but I think that's for a different thread.
 
Angra's point--assuming I understand--is that retribution is good even when it does harm without benefit.

Ah, well. In that case, I have to say that I did not realise that Angra, or anyone, was arguing for 'retribution without benefit'. Are you sure? Angra can clarify.

So let's set deterrence and protection aside (as I tried to do in my Joe and Sara example) and ask if there's still reason to want to hurt people.

I can't think of one.

(I might have slightly preferred 'punish' rather than 'hurt').

As for rehabilitation, I'm not seeing this as a punishment, beyond 'teaching the prisoner a lesson', which it seems to me is deterrence.

We could make the same not-a-punishment argument about the other motivations too. For myself, I'm willing to set those arguments aside and continue to use the word "punishment."

It could be that we are not using the word rehabilitation in the same way. I'm using it in the same sense as 'actively fixing the malfunctioning machine'. Therapeutic interventions and opportunities to study and suchlike. As such, it's not punishment?

There may be other ways of using the word.

The righteous insistence that people should be hurt for no reason, for no benefit. That seems to me villainous.

Ok so, first, I wasn't aware anyone was advocating that.

Second, the word villainous implies a villain. As a free will skeptic, I might be trying to be less morally judgemental than that. In fact, many free willies, as you call them, might want to be less judgemental than that. :)

And I don't see how belief in free will changes anything having to do with punishment.

Well, there are the experiments that suggest that weaker beliefs in free will result in lesser retributive urges.

Though to be fair, other experiments suggest that weaker beliefs in free will encourage more cheating.

It appears that the consequences of weaker beliefs in free will might be variegated. :)

Exactly!

If we could punish a computer to help it learn not to malfunction in the future, then maybe we should.

If we could punish a computer to dissuade other computers from malfunctioning, maybe we should.

If we could punish a computer by isolating it so it couldn't spread a virus to other computers, maybe we should.

But, should we punish a computer when there is no benefit at all?

That would be gratuitous cruelty.

But Angra approves of inflicting harm for no benefit in those cases where the harm is -- in some unexplained sense -- "fitting" or "deserved."

This feels to me like esthetic cruelty, as in, "I want to hurt people just for the beauty of inflicting pain," but Angra objects when I use the term "poetic justice." So there may be something here that I just don't understand.

ANGRA: I'm not trying to misrepresent you. I am trying to articulate my understanding in order to give you opportunity to clarify.

I had not realised that Angra, or anyone, was advocating for that.

That said, if the computer went on fire and burnt down a building, and several people died in the fire, we might then look to see if we could blame the human manufacturers.

In order to accomplish what?

Rehabilitation, deterrence, and isolation come to mind for me.

Retribution, inflicting pointless harm because it's pretty, does not appeal.

Yes, deterrence, isolation and an opportunity to fix (rehabilitate). Again, I missed that anyone was in favour of retribution for its own sake.

If they were, I might not call them villainous. I might just say that they either possess, or see as valid in others, normal human urges to rectify injustices. As I said, I think these (natural, non-villainous) urges might dissipate alongside a weakening of beliefs in free will. As a free willie yourself, you might want to dwell on that. You might see a potential opportunity to label people as villains a bit less. :)

I have heard it said that philosophers and others who claim there is free will, of some sort or other, are essentially still looking for someone to blame. It struck me that there might be some truth in that. A world where nothing and nobody is to blame for anything and where everything that goes wrong is more or less an unfortunate, inescapable accident or inevitability does not appeal, perhaps.

It's a point of view.

Sure. I would not have assumed either that the pov was wrong or even necessarily that it was true of all free willies (I like that term and might want to reuse it).

The sense of loss might be worse than there merely being no god and therefore no objective purpose or meaning to life.

I don't see how gods could have to do with my purpose or meaning, but I think that's for a different thread.

As you may have realised, I wasn't talking about you, though. You're an outlier. An atheist, I presume, like myself. I'm suggesting the situation would only compare, inasmuch as it does, between those whose belief in god weakens, on the one hand, and those whose belief in free will weakens (or at least some of them), on the other.
 