• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

The Great Contradiction

Originally Posted by Wiploc

Angra's point--assuming I understand--is that retribution is good even when it does harm without benefit.
Ah, well. In that case, I have to say that I did not realise that Angra, or anyone, was arguing for 'retribution without benefit'. Are you sure? Angra can clarify.

I could be confused. I've been confused before.

I believe Angra's position is that retribution should be done regardless of whether there is any benefit at all.



Originally Posted by Wiploc

So let's set deterrence and protection aside (as I tried to do in my Joe and Sara example) and ask if there's still reason to want to hurt people.
I can't think of one.

(I might have slightly preferred 'punish' rather than 'hurt').

If we could imagine a retribution that didn't involve harm, then I don't see how I could object to it. We could do that sort of "punishment" to everyone all the time.




Originally Posted by Wiploc

As for rehabilitation, I'm not seeing this as a punishment, beyond 'teaching the prisoner a lesson', which it seems to me is deterrence.



We could make the same not-a-punishment argument about the other motivations too. For myself, I'm willing to set those arguments aside and continue to use the word "punishment."
It could be that we are not using the word rehabilitation in the same way. I'm using it in the same sense as 'actively fixing the malfunctioning machine'. Therapeutic interventions and opportunities to study and suchlike. As such, it's not punishment?

Isn't that the primary goal of reformatories and penitentiaries?



Originally Posted by Wiploc

The righteous insistence that people should be hurt for no reason, for no benefit. That seems to me villainous.

Ok so, first, I wasn't aware anyone was advocating that.

I think Angra would claim there is benefit. But not rehabilitation, not isolation, not deterrence. No benefit I can recognize. Angra probably thinks harm is a benefit when it is "fitting," or when the victim "deserves" it.

But he has not pointed to any benefit, aside from claiming some puzzling concept of "justice."




Second, the word villainous implies a villain. As a free will skeptic, I might be trying to be less morally judgmental than that. In fact many free willies, as you call them, might want to be less judgmental than that. :)

Granted.

I certainly don't think of Angra as a villain. But, like Typhoid Mary, he has fallen into error.

I'm striving for clarity here. If my mother was a Nazi, I would think that was bad.

I think racism is bad, and I think people who promote racism are bad to the extent that they do so.

Why is racism bad? Because it hurts people; it makes them unhappy.

And, likewise, it is bad to promote hurting people for no good reason, for no reason at all aside from the desire to strike out at someone blamed. Hurting people without benefit is bad; it reduces the world's happiness.

To the extent that anybody promotes harm without benefit, that person is being bad.

I want to be clear about that. I don't want to lose that clarity by saying "punish" rather than "harm," or "person I sometimes disagree with" rather than "villain."

Angra is no villain, but I hope it is fair to describe the promotion of pointless harm as "villainous."
 
If P2 did not obtain, then weird things would follow, like mistranslations of moral terms between two different languages, or even between two communities using apparently the same language, or other weird things. Since those are extraordinary claims, the burden would be on the claimants (long ago, I used to think there was a good case from (apparent) disagreement to miscommunication, but now I think the case is weak).

I'm not sure what you're saying there.



To go back:


P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.


As you say, P1 & P3 seem fine.

I think I'd be happy to say that P2 seems fine also, as a hypothetical (possibly with some small caveats about what 'proper functioning' means) but I'm still confused as to what that implies. It seems to imply that a particular morality is essentially subjective, not universal. All that would be universal (with allowable exceptions) is that humans have some subjective sense of morality, which varies from human to human.

It seems to me that this is because no two humans are entirely the same.

In other words, it seems to me that P2 should be amended to:

P2: If two identical humans have properly functioning moral senses, they will yield the same outputs given the same inputs.

Then the reason that there is no particular universal morality (beyond the universality that there is some morality) would be that P2 does not actually pertain in the world. Or, if it did pertain (I should perhaps not entirely rule it out), that would only be in the exceptional case of those two humans, not humans in general. In general, we could say that it is universally true (again with allowable exceptions) that morality varies in humans.

I am using the expression 'properly functioning' in the usual sense of the words (of course, you could challenge that there is such a sense and suggest we are talking past each other; since you haven't raised that issue I will go on unless you say otherwise). For example, if I get conjunctivitis, then my eyes are not properly functioning. Similarly, if I'm short-sighted, my eyes are not functioning properly, but malfunctioning to different degrees. Similarly, if I get a broken arm, then some of my muscles will not function properly, and so on. The same goes for the brain/mind. There are plenty of different ways in which a human brain/mind can malfunction.

P2 says that if two human brains/minds get the same inputs to make a moral assessment, then their verdict (about e.g., who behaved immorally, how much so) is the same unless one of them has a malfunctioning moral sense. Note that the inputs might differ even if the two people are given the same description of events, because the inputs are the morally relevant aspects of a situation, and may well involve other beliefs that those two people have and are used when interpreting what happened.

I'm not sure what you mean by a particular morality being essentially subjective, not universal. Rather, the sense would not vary from human to human. Moral disagreements - e.g., disagreements about whether, say, it was morally wrong of Warren to accuse Sanders of telling her that a woman could not win the election - would be either the result of different inputs received by the respective moral senses (e.g., input 1: Warren lied; input 2: Warren did not lie), or the results of malfunctioning of the moral sense (with a bit of tolerance, I think, because I think that's the case with nearly all or all of our language).

The fact that no two humans are exactly the same is not the issue. For example, if two humans look at the desk in front of me now, they will both see a mouse on it, unless at least one has a malfunctioning visual system. Similarly, they will also see headphones. And they will see that the mouse is black and the headphones are red. You don't need humans to be identical for that. Similarly, if two humans see two bears entering a cave with one exit (which they checked was empty before) and then one bear getting out, they will reckon there is one bear in the cave (perhaps injured or dead, but one). If one of them does not reckon that there is one bear left, then something is malfunctioning (I'm talking adult humans, not a newborn).

The alternative P2 does not do the job - it's a very different hypothesis.
 
fromderinside said:
I thought I made it clear that morality and ethics come from two distinct and different systems. One is moral on some scale. If morality is a sense there are tugs and pushes on behavior. It comes with being a human. Ethics is a human created system of rules. One applies ethics on some scale consciously. I'm pretty sure being and doing are different things. You'd need one argument P for morals and another argument Q for ethics.
You are using the words in an unusual manner. In the usual sense, in English the terms 'immoral behavior', 'morally wrong behavior', and 'unethical behavior' all mean the same, so I use the same argument.
 
Second, I explained that those studies are in fact not relevant, because the problem is that incompatibilists have a mistaken theory about the meaning of the words. This isn't about neuroscience. It's about the meaning of expressions such as 'of one's own free will'.

As I have already said, if you want to effectively call the tip of an iceberg an iceberg, I can't stop you. It neatly avoids there being a contradiction. And yes, neuroscience would have no bearing on the matter, except in the sense of possibly being able to help explain why you would want to do that. :)

If you can provide a valid argument using the words in their usual sense (an argument that includes, of course, your claims about the usual sense of the words as premises, if you are going to use them to reach the conclusion), I'm all ears. :)

ETA: I consider the argument you offered in a later post.
 
Wiploc said:
But, should we punish a computer when there is no benefit at all?

That would be gratuitous cruelty.
Is the computer the sort of thing that can be punished?
If it is, is it the sort of thing that can deserve to suffer for its actions?
If it is, then it is not gratuitous, or cruel: it deserves it.

Wiploc said:
This feels to me like esthetic cruelty, as in, "I want to hurt people just for the beauty of inflicting pain," but Angra objects when I use the term "poetic justice." So there may be something here that I just don't understand.
It's not poetic. It's justice without the qualifier. Because they deserve it. And it's not because of the "beauty of inflicting pain". It's because they deserve it (it does not need to involve pain, though it might depending on the case).


Wiploc said:
I believe Angra's position is that retribution should be done regardless of whether there is any benefit at all.
No, that is not my position. My position is that just retribution is, in and of itself, a benefit. It is something good, it is justice.

On the other hand, while the passive voice makes it difficult to figure out who you're talking about, as I said I do not believe it is always morally permissible - let alone obligatory or praiseworthy - to exact just retribution, because as always, there are other factors to be considered, such as likely consequences to innocent third parties in case of war, revenge from the wrongdoer or the wrongdoer's friends, etc.
Wiploc said:
If we could imagine a retribution that didn't involve harm, then I don't see how I could object to it.
Well, if it's negative retribution (not positive retribution for morally praiseworthy actions), it involves harm, but not necessarily pain.

Wiploc said:
Angra probably thinks harm is a benefit when it is "fitting," or when the victim "deserves" it.
No, it's when it's fitting, and the perpetrator deserves it; it's without the quotation marks.


Wiploc said:
But he has not pointed to any benefit, aside from claiming some puzzling concept of "justice."
Why would an ordinary common concept be puzzling?
Ordinarily, humans talk about what punishments other humans (or even themselves) deserve for their actions. They talk about just punishment. And so on. I take it you learned those terms too, just as you learned other moral terms, like 'unethical', 'moral obligation', 'morally good person', and so on, or just like you learn other terms in English (not limited to moral ones). Why do you single out the very common concept of justice as puzzling?

Wiploc said:
I certainly don't think of Angra as a villain. But, like Typhoid Mary, he has fallen into error.
No, you have.

Wiploc said:
And, likewise, it is bad to promote hurting people for no good reason, for no reason at all aside from the desire to strike out at someone blamed. Hurting people without benefit is bad; it reduces the world's happiness.
Yes, that would be bad. The desire to strike out at someone blamed alone is not enough. A necessary requirement is that, on the basis of the information available to the person who intends to punish, the person that is punished deserves to be hurt in that manner, because of their actions.

Wiploc said:
Angra is no villain, but I hope it is fair to describe the promotion of pointless harm as "villainous."
You are no villain, either, but it is unethical to promote the belief that those who promote just retribution are behaving villainously.
 
ruby sparks said:
Oh what the heck, I'll try some logic. It might be fun. It's not my thing and I don't think it can sort this issue out.

P1. An X is a type of Y
P2. Something is fully X
P3. Therefore that something is fully Y
That requires implicit premises about the meaning of 'fully' and 'type of', which seem uncontroversial in this case, but a cleaner form would be:


P1'. For all A, if A is X, then A is Y.
P2'. B is X.
P3'. Therefore B is Y.

No matter, let us go.
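For what it's worth, the cleaner form P1'-P3' is just universal instantiation plus modus ponens, which a proof assistant will accept mechanically. A minimal Lean sketch (the names α, X, Y, b are illustrative placeholders, not anything from the thread):

```lean
-- P1': every X is a Y;  P2': b is an X;  therefore P3': b is a Y.
example {α : Type} (X Y : α → Prop) (b : α)
    (p1 : ∀ a, X a → Y a) (p2 : X b) : Y b :=
  p1 b p2
```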


ruby sparks said:
For example:

Blue is a type of colour
An object is fully blue
Therefore the object is fully coloured

A sheep is a type of mammal
An entity is fully a sheep
Therefore the entity is fully a mammal
The additions of 'a type of' and 'fully' seem superfluous, and obscure the arguments. Still, under some assumptions about the meaning of the words that appear fairly innocent at first glance (though I haven't studied them for long), they seem to hold. However, in an argument where the assumptions are controversial, this sort of thing could be a problem. No matter, let us consider the real deal:

ruby sparks said:
A prior determination is a type of constraint regarding making free will choices
An entity is fully prior determined
Therefore that entity is fully constrained regarding making free will choices
What does 'a prior determination' mean?
The hypothesis that the universe is deterministic means, in a formulation that I think is common, that the laws of nature (whatever those are) and the state of the universe at some time t jointly determine what the state will be at any time t'>t. However, it is unclear what 'a prior determination' would mean in that context. There are alternative formulations in terms of causes (even by philosophers who reject the notion of laws of nature), and I do not know which one you prefer. But in any case, the first premise appears ambiguous at best.
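That common formulation can be written compactly. This is my own paraphrase, with S, L, and F as labels I'm introducing for the sketch:

```latex
% Determinism as formulated above: the laws of nature L and the
% state S(t) at any time t jointly fix the state at every later t'.
\[
  \forall t,\, t' \text{ with } t' > t:\qquad
  S(t') = F\big(L,\, S(t),\, t'\big)
\]
```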

Now, if by 'a prior determination' you mean a previous cause, in the usual sense of the words, then it is of course false. Some of the causes of my deciding to write this - of my own free will - are my thinking about the matter and my desire to defend the correct position. Those, however, are not constraints, in any relevant sense of the word 'constraint'. Specifically, those causes do not restrict the freedom of my choice, and are instead part of the process by which I make said free choice.

ruby sparks said:
As I said, I think that formal logic is going to be of limited use here, just as it would be for, say, deciding if we really are evolved from apes for example. But it seems we have a fan of formal logic rather than science here.
I'm a fan of both. But which one you should use depends on the sort of claim that you are making. If you are claiming contradiction, then it's logic. If you are claiming an empirical assessment, then the latter. If you are claiming sometimes one and sometimes the other (as you are), then you should think about the matter more carefully, stop being inconsistent, and then choose the one you reckon applies.
 
Wiploc said:
I tried to construct a pure-retribution scenario.
You did it poorly, I'm afraid. As I made clear, I am talking about just retribution. I'm talking about people who deserve it. You made a scenario in which the daughter is targeted. And you did not even give a reason to suspect that Sara herself deserves to be killed, let alone her daughter, who is clearly not guilty of anything.

Bomb#20 is making an assessment on the basis of the sort of reply you post, which definitely paints retributivists in the worst possible light, by presenting a case of clearly, evidently unjust vengeance. Now, I do not think that you did that deliberately, but you did that.

I believe you do understand the difference between unjust revenge and just retribution (or revenge if you like, but only if you forget the negatively loaded component in a common usage of the word 'revenge').

Take a look at the following post. There you can find an example, which you might or might not agree with, but which might hopefully make you realize that you do understand the difference between just retribution and an unjust attack on the innocent.

https://talkfreethought.org/showthr...-Contradiction&p=758069&viewfull=1#post758069

Wiploc said:
So let's change it to cars rather than daughters. Or let's make Sara's daughter already "deserving" of death from a separate incident.
Changing to cars would work, as long as you also change what happened, so that Sara deliberately wrecked Joe's car. Changing to Sara's daughter "deserving" of death would not work, because:

1. It's about deserving, not "deserving". The quotation marks ruin it.
2. Even if Sara's daughter deserves to be killed for a separate incident, it would be unjust to kill her for what Sara did.
 
ruby sparks said:
I don't think you are exploring the matter thoroughly and taking all the determinants into account during the sequence of events.
If by that you mean all of the causes, of course I am not. It would not be possible (too many, since the Big Bang), and in any event, not relevant. Else, what do you mean by "determinants"?

ruby sparks said:
For example, and just for starters, why did you even try to think of anything at all when he said 'cola'? Why did you even just do that at all? Temporarily set aside, for the moment, that you believe that at some point after that, you did something of your own free will. Just focus on what happened when he said (or to be precise you read) the word 'cola'.
I did not try to think anything at all by then. Rather, I freely chose to read his post, and 'cola' was part of a sentence, but I read the whole sentence ' If I said cola you'd probably say Coke or Pepsi.', and after that, of course I had by then thought of Coke and Pepsi, because they were words in the sentence I had just read. Then I decided to look up a cola, in order to give a reply to his post that was not what he predicted. It wasn't the first that came to mind, either.

So, what happened was nothing. I read the sentence too fast to start thinking of brands mid-sentence.

However, I think I can give an answer of the sort you are looking for if I move from 'cola' to 'car', which he also said. The first name I thought of was 'Tesla', probably because I had read an article about Tesla (now that I think of it, it was a post mentioning an article). Now, I did not choose to think 'Tesla' of my own free will. I chose of my own free will to read his sentence. One of the many effects was that I thought 'Tesla'. That was not a free choice on my part. Nor was it a coerced choice. Rather, it was not a choice at all. It just happened. Then, I chose - of my own free will - to look for other names, because I intended to reply as I did: namely, with a name that I had never heard before, in order to address his point that I would say some brand I was conditioned to say.

So, when I decided to look for a name I had never heard before, I was thinking for a second 'what can I look for?', and one of the things that for some reason popped into my head was 'look up Chinese electric car makers'. Why? I do not know. That part was not my choice. Then I did decide it would work (i.e., it would give me what I wanted, a name of a car that I hadn't heard before), so I looked that up. The first one on the Wikipedia list that I had not heard before was Dongfeng. Since it did what I wanted, I decided of my own free will to post that name.

In short, our mental life involves both free choices and things that are not free choices but just happen. Those do not make our choices less free, or unfree. That's all over the place.

When I choose (freely) to, say, try to solve a difficult (for me at least) math problem, I deliberately choose for example to think about the matter, to dedicate time to it, etc., but I expect that my unconscious thought processing will yield the results of the computations into my conscious mind - and it does. Again, our thought processes involve free choices and unconscious processing all the time, intertwined. But this is not a problem for freedom.


ruby sparks said:
Tangentially (and you personally may not be interested in this given that you eschew neuroscience here, but others may be interested) it's possible, I believe, that you may have registered the word 'cola' non-consciously before it entered your consciousness.

1. I do not eschew neuroscience. It is interesting for its own sake and for other reasons, but it is being misused here (which you should know, since you are being inconsistent and both Bomb#20 and I have shown that more than once).

2. Sure, it's possible. But not relevant - as you should know, given that you argue for a contradiction.

ruby sparks said:
To me, one of the fascinating things about what neuroscience (and bearing in mind that neuroscience is possibly still only in its infancy) can bring to this issue is the ability to scientifically measure stuff happening at very very short timescales that we can't otherwise appreciate or discern. For example, it takes, say, a millisecond for a single electrochemical impulse to get across from one part of your brain to another. Experiments suggest that you can non-consciously register (and respond to) an image of a face, for example, in a time about 50 times longer than that (50 milliseconds) because, it seems, of trillions of impulses crisscrossing your brain during that longer time. But apparently it takes 10 times that long (500 milliseconds, ie the trillions of impulses have been travelling back and forth for 500 times the length of time it takes for each journey) for the recognition to become consciously experienced. It appears that your 'conscious now' is actually, always and inevitably what happened in the past, albeit a very short time ago. This makes sense, because it involves processing and that takes time.
The neuroscience is very interesting.
However, my 'conscious now' did not happen in the past. It had causes in the past, of course. But then, everything does. The fact that, say, a missile that explodes and kills a bunch of people obviously had prior causes (even if it was fully determined, if you like) does not change the fact that the missile caused the death of a bunch of people. Prior causes do not take away the missile's causation. And it does not mean that the explosion happened in the past. The causes of the explosion happened in the past. But that's not the point; the explosion still happens and is causally effective. So are my conscious choices. And the fact that they have causes - or that they are determined by previous events + the laws of nature, or whatever - also does not make them less free - why would it?

Again, I do not disagree with you about the empirical findings. Nor do I eschew them or ignore them. Rather, I disagree with you about the meaning of the words.
 
Angra's point--assuming I understand--is that retribution is good even when it does harm without benefit.
Ah, well. In that case, I have to say that I did not realise that Angra, or anyone, was arguing for 'retribution without benefit'. Are you sure? Angra can clarify.

According to Angra (and Bomb#20), the objective of retributive justice is not to make society safer or to make restitution to victims. From earlier in this thread:

Justice is not a means to an end - at least, not primarily. It is an end. The goal of exacting just retribution is that the perpetrator gets what he deserves. It is a good, just thing that he gets what he deserves. That is the benefit, not a further one - well, it may also have other benefits, like promoting social peace, but that is not the main reason.
I find it quite chilling.
 
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.

[...]

The alternative P2 does not do the job - it's a very different hypothesis.

It seems to me you are implicitly assuming the word 'identical', and that it needs to be included. Otherwise, your P2 does not work, as I see it.

To try to demonstrate, consider the following as regards P2: if you provide the same inputs to two machines, but the two machines are only very similar but not identical, the outputs will not be the same.

'Properly functioning' does not seem to cover it. Two very similar but slightly different machines (which I am suggesting is what human brains are) could both be properly functioning. In short, 'properly functioning', I'm suggesting, has wiggle room. 'Identical' does not.
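The "two similar machines" point can be made concrete with a toy sketch. The function names and thresholds below are hypothetical, purely for illustration:

```python
# Toy illustration: two "very similar but not identical" machines,
# both arguably "properly functioning", given the same input.
def machine_a(severity: float) -> str:
    # Machine A's verdict threshold: 0.50
    return "wrong" if severity > 0.50 else "permissible"

def machine_b(severity: float) -> str:
    # Machine B's verdict threshold: 0.52 - only slightly different
    return "wrong" if severity > 0.52 else "permissible"

# Same input, different outputs:
print(machine_a(0.51))  # -> wrong
print(machine_b(0.51))  # -> permissible
```

Note that on inputs far from the thresholds the two machines agree, which may be why such a difference would only show up in borderline cases.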

Unless you are assuming the conclusion at the outset, that there is a universal morality. In that case, yes, only one of the machines would be functioning 'properly' (in line with the supposed universal morality). But we don't know if it's the case, and that's a problem. It could just be that the two machines differ. End of. Personally, I suspect that this is the case, partly because I think morality is a mental construct that does not exist in the natural world. And if I were to temporarily use your approach back at you, I might ask you to demonstrate to me that that is false.

Apologies for not replying in detail to all your other points. I did read them carefully and there are things I could try to say, but I am trying to focus on what I think might be key. I have a day's work to start soon and am a bit short of time. If we were having a chat in a pub over a pint or two (an excellent setting for philosophical discussion in my opinion and much better than typing conversations into little boxes) I feel sure we could do it very enjoyably for hours on end. I like the rigour of your intellectual approach and I admire your ability to distil statements and claims into logic, albeit I am not sure if logic is the best tool to use to try to solve this problem.

I may even, later, if I can, revisit your post. Perhaps there is something you are saying that I am not understanding.
 
I'm a fan of both. But which one you should use depends on the sort of claim that you are making. If you are claiming contradiction, then it's logic. If you are claiming an empirical assessment, then the latter. If you are claiming sometimes one and sometimes the other (as you are), then you should think about the matter more carefully, stop being inconsistent, and then choose the one you reckon applies.

Hm. I'm not sure if it's inconsistent to deploy two tools for the same job :)

Perhaps some of both are required, two inputs as it were. This is often the case for jobs of all sorts. If you want to build a house, you need someone to draw up the plans and you need a builder, and you may even find both useful, literally simultaneously, during a particular part of the job, because both their inputs can inform each other.

Another partial analogy would be saying that biologists, physicists and chemists should not collaborate.

Let me, before I stop procrastinating here in this enjoyable discussion and get on with paid work, ask you a question. Go back to a time before Darwin. How could the question of whether or not we evolved from apes have been answered with logic, definitions and semantics? Or how could antisocial behaviour have been properly explained without genetics? The science needs to inform the philosophy, surely (and vice versa). In other words, it seems to me that this issue is, in a very fundamental and profound and important way, about more than just logic and/or the meaning of words. I worry that that part is really just a diverting game of sorts, in the end, and possibly a diversion away from using (or also using, in tandem) another tool, for at least more of the job. Or worse, it may be a false dichotomy to say that we can or should wholly separate, and use separately, the two sorts of input, the two tools.

In other words, perhaps you, somewhat ironically, by insisting on separate domains, are the one claiming a non-existent contradiction/inconsistency. :)



And just to note: I retracted my claim that there was a contradiction. I am now (a) having trouble working out how to construct an argument in logic to back up my alternative claim that you are mislabelling (calling the tip of an iceberg an iceberg) and (b) not even yet convinced I am obliged to do that. It seems to me an empirical fact that you are (at least it's an analogy for what I think you are doing) and yes I am trying my best to use your meaning of words when I say that.
Angra,

Another way I try to point up a potential shortcoming in relying heavily on definitions in such things is (was, I tried it earlier in the thread) to post a photo of part of the universe and say that it is a photo of (at least part of) god. In order to avoid a contradiction, I have merely defined god to be 'the natural universe'. Ergo, god exists. I think my semantics, my definitions and my logic are watertight. But... in a way, so what?

In other words, I am suggesting that even if you can get past my claim that you are mislabelling (which remains to be seen, imo, and I query that the burden is necessarily on me, or indeed anyone, to couch the claim in formal logic) the next obstacle I am going to put in your way is that your label is effectively redundant and/or meaningless.

It could still be pragmatic. I'm not saying a belief in free will is not pragmatic.

I hope to get back to you on your other posts later.
According to Angra, (and Bomb#20) the objective of retributive justice is not to make society safer or make restitution to victims. From earlier in this thread:

Justice is not a means to an end - at least, not primarily. It is an end. The goal of exacting just retribution is that the perpetrator gets what he deserves. It is a good, just thing that he gets what he deserves. That is the benefit, not a further one - well, it may also have other benefits, like promoting social peace, but that is not the main reason.
I find it quite chilling.

I'm not sure I do. I do think I know what you mean though. I think. Not sure.

If, for example, what is being said here is that the sense of justice having been done is, of itself, useful (psychologically for the wronged person and by extension for the society with which that person interacts) then that might be neither villainous nor chilling. It might be flawed, if for example it is based on an incorrect assessment of whether the wrongdoer had free will or not, and there might potentially be a better approach if beliefs in free will weakened or disappeared, but in the meantime it still might function pragmatically in a useful or good, albeit imperfect, way.

And then, and temporarily limiting things to pragmatics (and consequences) the issue may be more the extent or severity of the retribution, or other aspects of it (deterrence, isolation, etc) not the principle of retribution itself.

I also wonder if this is closely related to the idea of universal morality? Would someone who asserts that there is 'universal morality' also tend to say that there is a 'natural justice' (of the retributive sort)? Although that claim, if made, would seem to be a different sort of claim to saying that retribution is good/useful pragmatically or in terms of consequences.

I'm not sure about any of the above. But it's certainly a good discussion we're all having here, in my opinion.

Although if someone asks me to frame the claim (that the discussion is good) in formal logic, or even if I'm asked to retract that statement until I've properly defined 'good', I may have a slight hernia. :)
I thought this might be useful, even if only as visual light refreshment:

[YOUTUBE]https://www.youtube.com/watch?time_continue=26&v=ssSN2Jrqqlk&feature=emb_logo[/YOUTUBE]

It's arguably a decision-making agent acting of its own accord, I think.

Albeit it does not have learning capacity (it will not function better the next time). I believe some more sophisticated robots (and computer programs) even currently, have that capacity.

Nor does this robot run hypothetical (future) scenarios in order to make decisions, as the computer program 'Deep Blue' does. I think I'm correct in saying that Deep Blue also had the aforementioned learning capacity (in that new information regarding additional future hypotheticals is integrated into its next performance). But I'm not sure about that. Perhaps all its 'possible future hypotheticals' from any given scenario presented to it were loaded initially and fixed thereafter (unless reprogrammed externally).

What I am suggesting, as a free will skeptic, is that the difference between the robot in the video and the human brain is merely one of complexity, but that nonetheless and crucially, it remains true of both that neither can freely choose to do (and quite possibly cannot even merely do) otherwise than what it does in a given situation.

Somehow endow the robot in the video with consciousness and it might report that it freely chose, but by the time it is able to report it, it has already done it, partly because of the time lag between events and conscious awareness of them, and perhaps also the time taken to make the report. Thinking 'I intend to do this a moment after thinking this' does not get around the problem, and is quite a rare event in any case. We do lots of things all the time while conscious without having that particular thought beforehand.

All that, if true, if consciousness is 'too late on the scene' to affect the particular choice it is reporting, raises the question of why there is consciousness at all. What is the function of it, in other words. But there are suggested answers for that that do not involve free will.
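To make the skeptic's point a bit more concrete, here is a toy sketch of a deterministic decision-making 'agent', something like the vacuuming robot. It is entirely hypothetical (the states and the function are my own invention, not taken from the video or from Deep Blue), but it shows the relevant property: given the same inputs, it always produces the same 'choice', and in that sense cannot do otherwise.

```python
# A toy, purely illustrative deterministic agent (hypothetical; not
# the robot in the video). Given the same sensor state, it always
# produces the same action -- it cannot 'do otherwise'.

def choose_action(sensor_state):
    """Map a (bumper_hit, battery_low) reading to an action, deterministically."""
    bumper_hit, battery_low = sensor_state
    if battery_low:
        return "return_to_dock"
    if bumper_hit:
        return "turn_and_continue"
    return "move_forward"

# Run the agent twice from the identical state: the 'choice' is identical.
state = (True, False)
first = choose_action(state)
second = choose_action(state)
print(first, second, first == second)  # turn_and_continue turn_and_continue True
```

The free-will-skeptic claim, as I understand it, is that a brain is like this function but vastly more complex: more inputs, more internal state, perhaps learning that rewrites the function between runs. None of that complexity, by itself, buys the ability to have chosen differently from the same total state.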
I also wonder if this (retributivist morality) is closely related to the idea of universal morality?
If by "universal morality" you mean objective morality (Moral universalism) then, yes, I think they probably are closely related.

I'm not sure I understand the difference between the two, as regards how the former term (universal morality) is being used here.

Going by that wiki page, that does seem to be what is being talked about here.

It seems to me that what some of these ideas are assuming and what, for example, Angra is saying, is that there is a hypothetical 'standard human' (sort of like there may be a standard carpet vacuuming robot in a way). I wonder if the idea of 'proper functioning human' comes from this.

If so, I would ask, is a person with
Whoops, belated editing issue.

To repeat:

It seems to me that what some of these ideas are assuming and what, for example, Angra is saying, is that there is a hypothetical 'standard human' (sort of like there may be a standard carpet vacuuming robot in a way). Indeed, certain things I read in philosophy almost seem to go further and assume a hypothetical 'standard rational human being', but we can leave that aside for now, I think.

I wonder if the idea of 'proper functioning human' comes from this.

It seems a potentially dubious idea to me.
 
Is the computer the sort of thing that can be punished?
If it is, is it the sort of thing that can deserve to suffer for its actions?
If it is, then it is not gratuitous, or cruel: it deserves it.


It's not poetic. It's justice without the qualifier. Because they deserve it. And it's not because of the "beauty of inflicting pain". It's because they deserve it (it does not need to involve pain, though it might depending on the case).

I can't figure out a way to agree.



Wiploc said:
I believe Angra's position is that retribution should be done regardless of whether there is any benefit at all.
No, that is not my position. My position is that just retribution is, in and of itself, a benefit. It is something good, it is justice.

I want to thank you for your tone and attitude in this discussion, your patience with my attempts to represent your opinions to others.

You think that retribution, in the total absence of any other benefit, is still in itself a benefit.

My own position is that I totally don't get that. If I see no "benefit" other than retribution, I see no benefit at all. In my attempt to convey this perception, I wrote that you favor retribution regardless of whether there is any benefit at all.

That is how I see it, since I don't see retribution as a benefit in and of itself, but it is not how you see it.

I appreciate your tolerance.



On the other hand, while the passive voice makes it difficult to figure out who you're talking about, as I said I do not believe it is always morally permissible - let alone obligatory or praiseworthy - to exact just retribution, because as always, there are other factors to be considered, such as likely consequences to innocent third parties in case of war, revenge from the friends of the punished wrongdoer or the wrongdoer, etc.
Wiploc said:
If we could imagine a retribution that didn't involve harm, then I don't see how I could object to it.
Well, if it's negative retribution (not positive retribution for morally praiseworthy actions), it involves harm, but not necessarily pain.

Harm, pain, unhappiness, suffering, etc.. No need to split hairs on this point at the same time as we try to split hairs on another.



Wiploc said:
Angra probably thinks harm is benefit when is "fitting," or when the victim "deserves" it.
No, it's when it's fitting, and the perpetrator deserves it; it's without the quotation marks.

I can't give you that. If I take away the quotation marks, then I'm granting you your point by making the words meaningless.

It's like when I grant Tanager that his gods would have objective moral authority over us if they did in fact have objective moral authority over us. I can't know what Tanager means by "objective" or "authority" in my own sentence. Those are weasel words, usually. Their meaning often changes from one part of a sentence to another. What I'm granting is an undefined truism.

But, back to our own discussion, I don't see how harm can be fitting or deserved if it doesn't accomplish anything. And I don't see how harm (retribution) without side effects (like rehabilitation) can be fitting or deserved.

It's pure harm, with no benefit. It is unadulterated badness.

So the scare quotes around "fitting" and "deserved" let me use your words without agreeing that I understand what you mean by them in that context.



Wiploc said:
But he has not pointed to any benefit, aside from claiming some puzzling concept of "justice."
Why would an ordinary common concept be puzzling?
Ordinarily, humans talk about what punishments other humans (or even themselves) deserve for their actions. They talk about just punishment. And so on. I take it you learned those terms too, just as you learned other moral terms, like 'unethical', 'moral obligation', 'morally good person', and so on, or just like you learn other terms in English (not limited to moral ones). Why do you single out the very common concept of justice as puzzling?

Let me quote Thomas McCormack in The Fiction Editor:

Just roll past the here-undefined word "cluster." Defining that wouldn't help us with our discussion.

--begin McCormack quote--

The cluster twinkles like a Disney Christmas with
familiar, genial-seeming terms. But in fact it's a
source of confusion, frustration, misconception,
and miscarriage.

The confusion is betrayed initially by the cluster's
disordered vocabulary. Ever since Aristotle first
groped into its occulting gloom, commentators
have failed to agree on a consistent lexicon:
Philosophers, teachers, critics, and writers of
how-to-write books have reinlessly used words
like 'plot', 'story', 'structure', 'situation', 'theme',
'premise', 'proposition', 'crisis', 'catharsis', 'resolution'--
and on through an attic of jumbled and overlapping
terminology.

Underlying this verbal pandemonium is, predictably,
conceptual chaos: The words arise from ideas that are
blurred and rimless.

This comes perilously close to saying that something
essential to discuss is essentially undiscussable, but
I have to bear-dance into it anyway because it's a
crucial area that editors often mention, rarely think
through, and never adequately understand, and
my assignment is to make the case that something
can be done about this.

--end McCormack quote--

McCormack is talking about literary terminology, but his point is true in many other areas. It's amazing that we can communicate at all.

I think retribution was probably sometimes on-balance good back before we thought about what it meant. But when I read that the four reasons for punishment are rehabilitation, isolation, deterrence, and vengeance, that made sense to me.

It was like discovering that water is made of hydrogen and oxygen. Or of hydrogen, oxygen, and pollutants, if we punish partly for vengeance.

When we dissect punishment, vengeance is the road-rage part, the malice, the desire to hurt out of anger and self-righteousness. It is the bad part.

I can see how retribution could be sometimes-good back before we identified the parts. Because it had the valuable sometimes side-effects of rehabilitation, isolation, and deterrence.

You think "justice" is an ordinary concept that everyone should understand. I think it is a controversial concept, blurred and rimless, one that has been disputed by experts for millennia.

And I don't see any way -- once we set aside rehabilitation, isolation, and deterrence -- that what is left can ever be fitting, deserved, or just.

As far as I can see, retribution -- as distinct from rehabilitation, isolation, and deterrence -- is mere cruelty, wearing a self-righteous mask.