
The Great Contradiction

If he can't do otherwise, then we can't either. In which case, we cannot change the amount of blame or retribution.

In my opinion it would be better to say we can’t freely will a change in the amount of blame or retribution.

In whatever way it comes about (let's say it's determined), once the system that calls itself you becomes aware of (receives informational input about) the apparent state of affairs (that he/you/the murderer couldn't, it would seem, have done otherwise), then that knowledge, or if you prefer that belief, can be causal for a change, in this case a change of your perspective, and thus possibly of your behaviour, even if only a change of emphasis or an intermittent one (because it's counter-intuitive to your other or prior beliefs).

Not entirely unlike, though possibly more intractable and challenging than, the time you (the system that calls itself you) learned or decided (or came to believe) that you/it are an ape descended from the same ape ancestor as monkeys, or the time you/it decided there was probably no god. When I say you I mean people (human systems) generally. Both of those realisations/beliefs are/were arguably momentous for human understanding of what it actually might be (appears likely to be) to be human. Now we are discussing another one.

Yet another one involves our idea of self. That one is related to our sense of our agency, and as such to the topic of free will. There is a decent case to be made that the self is a system illusion.
 
In post 241, Ruby Sparks wrote:

Originally Posted by Wiploc

If he can't do otherwise, then we can't either. In which case, we cannot change the amount of blame or retribution.

In my opinion it would be better to say we can’t freely will a change in the amount of blame or retribution.

In whatever way it comes about (let's say it's determined), once the system that calls itself you becomes aware of (receives informational input about) the apparent state of affairs (that he/you/the murderer couldn't, it would seem, have done otherwise), then that knowledge, or if you prefer that belief, can be causal for a change, in this case a change of your perspective, and thus possibly of your behaviour, even if only a change of emphasis or an intermittent one (because it's counter-intuitive to your other or prior beliefs).

Not entirely unlike, though possibly more intractable and challenging than, the time you (the system that calls itself you) learned or decided (or came to believe) that you/it are an ape descended from the same ape ancestor as monkeys, or the time you/it decided there was probably no god. When I say you I mean people (human systems) generally. Both of those realisations/beliefs are/were arguably momentous for human understanding of what it actually might be (appears likely to be) to be human. Now we are discussing another one.

Yet another one involves our idea of self. That one is related to our sense of our agency, and as such to the topic of free will. There is a decent case to be made that the self is a system illusion.

That's an excellent response. You're not actually saying that we have free will, but you nonetheless think our actions can be somehow informed as if we have free will.

And the response, of course, is that we could use similar locutions to point out that the serial rapist and murderer's actions can be informed by our blame and retribution. Thus, while he may not have free will, it may be beneficial to behave as if he does.
 
Another momentous realisation/discovery/belief, momentous for at least some humans at the time, was that earth is not only not the centre of the universe, but is an insignificant speck almost literally of no actual importance at all.

That one has since been confirmed by much evidence and observation, a bit like the 'ape descended from an ape' one. But not like the 'no god' one, to which it is tangentially related. As to free will (and indeed self), neuroscience and genetics are starting to make inroads, and it is mostly from there, and not from pure (i.e. unempirical) philosophy, that the challenges to our ideas of free will (and self) have been coming lately.

They are arguably all examples of (or, in the case of a lack of free will, a potential example of) 'big shocks/blows to the system'.

What has tended to happen, as we know, is that if it becomes necessary, or if not acknowledging it becomes difficult, humans tend to recover from the 'bad news' in each case and life goes on, in some ways more or less as it did before.
 
You're not actually saying that we have free will, but you nonetheless think our actions can be somehow informed as if we have free will.

It is very, very complicated, and hard to find the best words for, and I'm not claiming to understand it. And almost as important if not more so, I do not pretend to know what the consequences of humans losing their belief in free will would be (though I don't tend to panic or worry).

As regards, "our actions can be somehow informed as if we have free will" and if I understand what you mean correctly, then how about I say, "we act as if we had free will" (in that rightly or wrongly, we believe we have it)? How does that sound?

And the response, of course, is that we could use similar locutions to point out that the serial rapist and murderer's actions can be informed by our blame and retribution.

Yes, the rapist is one system (by and large, or at least can be considered as separate for some purposes), and that system is causally affected, at least to some extent, by other systems in its environment (e.g. other humans), and also by its own inner workings (e.g. its genetics), with both (inner and outer influences/causes) interacting. Possibly, under some conceptions, all of these are part of extended, overlapping systems, and ultimately one huge one (the universe).

Thus, while he may not have free will, it may be beneficial to behave as if he does.

I'm still not exactly sure what you mean here.

Let me try to answer. I am not sure if it is beneficial, either for him or for humans generally. It may be and it may not be. That is in any case arguably a secondary consideration (secondary to the issue in the first place of whether free will exists).

What I would say is that it seems very, very difficult to wholly and consistently disbelieve in free will, perhaps especially one's own. That's why I say it may be more of an intractable issue than some of the other challenges I mentioned.

The only caveat I would add to that is that it (free will skepticism or even disbelief) may be especially difficult for the 'western' mind, and perhaps especially the American one, for what may partly be cultural reasons. It might be that the USA is The Land Of The Free in some quite profound psychological ways. :)
 
In other words, “He is not a bad person, he just did a bad thing, and I would have done it if I had literally been in his shoes”.

Maybe I was wrong here, in the bolded part ('he is not a bad person'). I was trying to avoid negative subjective moralities. It might be better or more accurate to say that he (the rapist/murderer) is a bad person, especially if we avoid subjective morality as far as possible and just say a 'harmful' or 'flawed' person. Essentially, a faulty or broken machine, that is, one that causes harm to other machines.

As such, we can see that in a subsystem (of the universe) such as earth, for example, other, smaller subsystems or earth-machines (e.g. humans) might decide to terminate the faulty machine, or merely (if it can be afforded) put it somewhere it can't harm others, or possibly even try to fix it. A retributive urge would not, hypothetically, be a necessary part of that (just as it would not necessarily be a part of a decision to scrap or fix a computer).

I am not saying that a world without retributive urges would necessarily be a better or worse place, by the way, because that's a very, very difficult question and our retributive urges (including my own) seem to be deep-seated. You might say that I tend to believe it would be a better place, I suppose, but maybe that's just wishful thinking.

ETA: it's worth noting that a number of psych studies suggest that a stronger sense of or belief in free will is positively correlated with a stronger retributive urge, and a weaker belief in free will is associated with a lesser retributive urge.

This is broadly in line with other, related forms of 'leniency', such as are represented in 'diminished responsibility' clauses in the laws and courts of certain countries nowadays, even though those are mostly informed by folk psychology and are only exceptions in legal systems that otherwise still maintain, represent and enact a fairly strong belief in free will (partly because of perceived pragmatism). If advances in neuroscience, genetics and other fields make it increasingly difficult to maintain such a paradigm, then some changes or adjustments may take place.
 
Thus, while he may not have free will, it may be beneficial to behave as if he does.

I might agree that it is (or at least seems) easier to think and behave as if I (and other human agents around me) do. Perhaps that's what you meant?

The aforementioned psych studies might suggest it is not beneficial, in some ways at least. In other ways, it would seem to be useful and thus beneficial. As a coping strategy, the illusion (if that's what it is) might be very useful indeed. Such illusions tend to be. There was a time when believing in god was for nearly everyone a useful thing, although probably an illusion, and especially when nearly everyone else believed too (and proscribed disbelief). To some, it still is a useful thing, in fact, both psychologically/biologically and socially.

I would want to reiterate that consequences (or indeed imagined future consequences) are of course secondary considerations, even if relevant.
 
Another momentous realisation/discovery/belief, momentous for at least some humans at the time, was that earth is not only not the centre of the universe, but is an insignificant speck almost literally of no actual importance at all.

That one has since been confirmed by much evidence and observation, a bit like the 'ape descended from an ape' one. But not like the 'no god' one, to which it is tangentially related. As to free will (and indeed self), neuroscience and genetics are starting to make inroads, and it is mostly from there, and not from pure (i.e. unempirical) philosophy, that the challenges to our ideas of free will (and self) have been coming lately.

They are arguably all examples of (or, in the case of a lack of free will, a potential example of) 'big shocks/blows to the system'.

What has tended to happen, as we know, is that if it becomes necessary, or if not acknowledging it becomes difficult, humans tend to recover from the 'bad news' in each case and life goes on, in some ways more or less as it did before.

This is why in the OP I mentioned more down-to-earth explanations for the behavior in humans: not philosophical, but root-cause explanations. Primarily I was pointing to the brain, and to the fact that the brain is what it is and we don't have a conscious choice over that, though it might be comforting to think we do; that comforting thought is itself an aspect of behavior dictated by brain construction, just more survival behavior that has been selected for.
 
This is why in the OP I mentioned more down-to-earth explanations for the behavior in humans: not philosophical, but root-cause explanations. Primarily I was pointing to the brain, and to the fact that the brain is what it is and we don't have a conscious choice over that, though it might be comforting to think we do; that comforting thought is itself an aspect of behavior dictated by brain construction, just more survival behavior that has been selected for.

Yeah.

Also, I think there is something to the suggestion that human knowledge (and possibly other social changes) has, perhaps especially in recent centuries, generally been changing faster than a lot of natural selection, and thus faster than we can easily adjust.

Which makes me want to ask, briefly and tangentially, when did humans first start thinking they had free will? I guess no one knows, but it's likely to be at least several thousand years ago (first written accounts) or possibly (for all I know) much earlier.

My guess is that the idea of free will came along not long after, and possibly in conjunction with, the development of the sense of self, with the self as the perceived agent, and/or when humans felt that they could quite successfully predict the intentions, and thus the actions, of other creatures, most notably perhaps other humans, but also predators, and possibly even things like wind and rain. From there it's a smallish step to gods, one of which, an anthropomorphic one (the Judaic one), is eventually credited with giving us some of what they have (free will).
 
This is why in the OP I mentioned more down-to-earth explanations for the behavior in humans: not philosophical, but root-cause explanations. Primarily I was pointing to the brain, and to the fact that the brain is what it is and we don't have a conscious choice over that, though it might be comforting to think we do; that comforting thought is itself an aspect of behavior dictated by brain construction, just more survival behavior that has been selected for.

Yeah.

Also, I think there is something to the suggestion that human knowledge (and possibly other social changes) has, perhaps especially in recent centuries, generally been changing faster than a lot of natural selection, and thus faster than we can easily adjust.

Which makes me want to ask, briefly and tangentially, when did humans first start thinking they had free will? I guess no one knows, but it's likely to be at least several thousand years ago (first written accounts) or possibly (for all I know) much earlier.

My guess is that the idea of free will came along not long after, and possibly in conjunction with, the development of the sense of self, with the self as the perceived agent, and/or when humans felt that they could quite successfully predict the intentions, and thus the actions, of other creatures, most notably perhaps other humans, but also predators, and possibly even things like wind and rain. From there it's a smallish step to gods, one of which, an anthropomorphic one (the Judaic one), is eventually credited with giving us some of what they have (free will).

Right, and also that this "self" has control. Every organism I've ever observed behaved similarly, like it has control. Trees seem to control quite nicely how they grow, how fast, when to put on new leaves, and when to flower to attract pollinators; they do a great job of controlling themselves with respect to their environment. Same for a virus and every organism in our guts. And humans do really dumb things, indicating they don't have very good control.

It probably comes down to degrees of self-awareness. At least by human standards we can be pretty certain that a tree is not self-aware. Maybe a millipede is self-aware to a degree. Birds that are picky about their mates having the proper plumage and mating dance might be more self-aware. But how self-aware is a human that eats itself into obesity and poor health?

Claims of free will are probably more attempts at self-comfort via claiming control. It's a way to explain and make the bad things go away.
 
Right, and also that this "self" has control. Every organism I've ever observed behaved similarly, like it has control. Trees seem to control quite nicely how they grow, how fast, when to put on new leaves, and when to flower to attract pollinators; they do a great job of controlling themselves with respect to their environment. Same for a virus and every organism in our guts. And humans do really dumb things, indicating they don't have very good control.

It probably comes down to degrees of self-awareness. At least by human standards we can be pretty certain that a tree is not self-aware. Maybe a millipede is self-aware to a degree. Birds that are picky about their mates having the proper plumage and mating dance might be more self-aware. But how self-aware is a human that eats itself into obesity and poor health?

Claims of free will are probably more attempts at self-comfort via claiming control. It's a way to explain and make the bad things go away.

A tree may be extremely unlikely to be self-aware (it even seems very unlikely that dogs are), but as you say, go back far enough and it's possible that earlier humans attributed both awareness and agency to...trees, for instance, and we may sometimes still do it, during our more superstitious moments (when the branches tap tap against the window pane of our bedroom in the middle of the night). So I generally agree with you.

To add something, when we wake up in the morning, this 'thingy' pops into existence, usually quite fast, although it may sometimes take a few seconds or more, and it feels like it's inside our head, just behind our eyes, and looking out, and we can 'hear' it 'talking', in fact...'it' appears to be 'us' talking!

So, the big question is, what is this thing and what does it do? It feels completely natural to call it 'me' and ascribe it agency and the ability to freely choose to do stuff, but that may be a big mistake, or even two big mistakes.
 
I must admit that I don't know what the "thingy" is you are experiencing. Unless life is presently quite stressful, my morning moments are blissful.

Maybe you mean that it sometimes seems like we are more than one person. I can identify with that, but it's easy enough to reconcile the two people as one person with limitations. When we drive our cars they eventually wear out and begin to act differently compared to when they were better maintained or younger. But it's still the same vehicle, obviously; it hasn't been possessed, and no magic is involved. But it's fun for some people to make movies about cars being sentient entities of their own, or possessed.
 
ruby sparks said:
Added to which, if I think that he could not have done otherwise, then there is less reason to attach personal blame or want retribution.
If he had not been able to do otherwise in the usual sense of the words, yes. But that is not the case: Suppose the defense attorney of the serial rapist and killer says that the universe is either deterministic or has some randomness, but either way, he could not have done otherwise, and so he is not guilty. Do you think that in the usual sense of the words, he has created reasonable doubt, so that jurors ought to acquit?

ruby sparks said:
The only caveat I would add to that is that it (free will skepticism or even disbelief) may be especially difficult for the 'western' mind, and perhaps especially the American one, for what may partly be cultural reasons.
It's just as difficult elsewhere. In other countries, they might not use the English expression 'free will', and they might not even believe in whatever is usually mistranslated as 'free will', in cases where it is a mistranslation. But they believe we act of our own accord/free will (e.g., 'por su propia voluntad' in Spanish-speaking places), and they believe in retribution. Just take a look at the history of humankind, or the present. Retribution is everywhere. And so is the distinction between people who act of their own free will, and those who do not.

ruby sparks said:
Maybe I was wrong here, in the bolded part ('he is not a bad person'). I was trying to avoid negative subjective moralities. It might be better or more accurate to say that he (the rapist/murderer) is a bad person, especially if we avoid subjective morality as far as possible and just say a 'harmful' or 'flawed' person. Essentially, a faulty or broken machine, that is, one that causes harm to other machines.
I do not know what you mean by "subjective morality" here, but I do know that 'bad person' does not mean 'harmful' or 'flawed'. It means a specific kind of flaw: a moral flaw. He is an evil person. An immoral person. A wicked person. And so on.

For example, an insane man who has lost touch with reality, cannot reason and goes around killing people, is surely harmful and flawed. A broken machine if you like. But he is not a bad person. He is not at fault. He is not to blame. And so on. So, the meaning of those expressions is not the same. We have different words for different sorts of broken machines. Some of those deserve punishment, and others do not.

ruby sparks said:
I am not saying that a world without retributive urges would necessarily be a better or worse place, by the way, because that's a very, very difficult question and our retributive urges (including my own) seem to be deep-seated. You might say that I tend to believe it would be a better place, I suppose, but maybe that's just wishful thinking.
It would be a world without humans, chimps, bonobos, gorillas, capuchin monkeys, etc. Retribution and morality come from millions of years of evolution.

ruby sparks said:
ETA: it's worth noting that a number of psych studies suggest that a stronger sense of or belief in free will is positively correlated with a stronger retributive urge, and a weaker belief in free will is associated with a lesser retributive urge.
I'm not saying the human system of moral retribution cannot be damaged by a belief that we cannot act of our own free will. But it almost certainly cannot be destroyed in the majority of the population by that method - studies still show retribution all around (even if somewhat diminished), and frankly, short of much more massive damage to the brain (as with surgery), it's extremely unlikely that something so central could be taken out. I'm not even sure it can be destroyed in any human by that method.

At any rate, it is a part of the human mental machinery. If you manage to take it out, you get a broken machine.

ruby sparks said:
As a coping strategy, the illusion (if that's what it is) might be very useful indeed. Such illusions tend to be.
But why do you think we have an illusion?
I believe I act of my own accord, because the evidence is decisive. For example, I choose to move my mouse, and I do it. No one forced me. I was not compelled like a kleptomaniac, either, etc. The universe might be deterministic, but I am not under the illusion that the universe is indeterministic. I do not know whether it is. I take no stance, but I see no reason to believe I'm not acting of my own free will, regardless of whether under the same exact circumstances, particles and all, in the whole universe, it was necessary that I would act as I did.

For that matter, I have the power to move my mouse. I can see I do it, and the evidence is decisive. Now someone might say that perhaps I don't have that power, and some powerful entity is moving the mouse when I intend to, in order to mess with my head. But that seems extremely improbable. Similarly, the free will illusion idea seems extremely improbable.
 
For that matter, I have the power to move my mouse. I can see I do it, and the evidence is decisive. Now someone might say that perhaps I don't have that power, and some powerful entity is moving the mouse when I intend to, in order to mess with my head. But that seems extremely improbable. Similarly, the free will illusion idea seems extremely improbable.

From whence intention? From you or from machine finding reason for forming intention? Since we don't act on the world without becoming aware of the world - even there what is the purpose of acting if we don't know - we react. Intention is derivative.

If I had to guess your arguments are too much about you.
 
For that matter, I have the power to move my mouse. I can see I do it, and the evidence is decisive. Now someone might say that perhaps I don't have that power, and some powerful entity is moving the mouse when I intend to, in order to mess with my head. But that seems extremely improbable. Similarly, the free will illusion idea seems extremely improbable.

From whence intention? From you or from machine finding reason for forming intention? Since we don't act on the world without becoming aware of the world - even there what is the purpose of acting if we don't know - we react. Intention is derivative.

If I had to guess your arguments are too much about you.

I don't understand your post. Could you clarify your objection, please?
 
Suppose the defense attorney of the serial rapist and killer says that the universe is either deterministic or has some randomness, but either way, he could not have done otherwise, and so he is not guilty. Do you think that in the usual sense of the words, he has created reasonable doubt, so that jurors ought to acquit?

Yes, I think there is reasonable doubt, arising mainly from philosophy, neuroscience and genetics, as to whether the rapist, or any rapist, could have freely chosen to do otherwise.

As to acquittal.....

Imagine the court case taking place in and among a society made up of and run by sophisticated thinking/reasoning robots that don't have free will but are nonetheless capable of learning and making decisions. The robots who decided what to do with the serial raping robot would probably not acquit it, for a variety of reasons. For instance, the faulty (raping) robot, if freed, could easily rape again. And/or, since the society is made up of thinking robots with learning capacities, jailing raping robots would be a causal deterrent to other potentially-raping robots.
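
As an aside, the deterrence part of that story doesn't need anything beyond ordinary cause and effect. Here is a minimal toy sketch (hypothetical Python, with made-up agents, rates and an illustrative update rule; it isn't a model of real offenders or of any actual robots, just an illustration that punishment can causally reduce offending among learning systems with no free will anywhere in the picture):

[CODE]
# Toy model: deterrence as a causal input to simple learning agents.
# All numbers and the update rule are illustrative assumptions.
import random

def run(rounds=50, n_agents=100, punish=True, seed=1):
    rng = random.Random(seed)
    propensity = [0.2] * n_agents        # each agent's chance of offending in a round
    offences_per_round = []
    for _ in range(rounds):
        offenders = [i for i in range(n_agents) if rng.random() < propensity[i]]
        offences_per_round.append(len(offenders))
        if punish and offenders:
            # Seeing offenders jailed nudges every agent's propensity downwards:
            # the 'causal deterrent' in the robot-society story.
            propensity = [max(0.01, p * 0.95) for p in propensity]
    return sum(offences_per_round[-10:]) / 10   # average offences over the last 10 rounds

print("avg offences per round, punishment observed:", run(punish=True))
print("avg offences per round, no punishment      :", run(punish=False))
[/CODE]

On this toy setup the punished population ends up offending far less often, which is all the point requires: the jailing works as an input to the other machines' future behaviour, whether or not anyone involved 'could have done otherwise'.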
 
For that matter, I have the power to move my mouse. I can see I do it, and the evidence is decisive. Now someone might say that perhaps I don't have that power, and some powerful entity is moving the mouse when I intend to, in order to mess with my head. But that seems extremely improbable. Similarly, the free will illusion idea seems extremely improbable.

From whence intention? From you or from machine finding reason for forming intention? Since we don't act on the world without becoming aware of the world - even there what is the purpose of acting if we don't know - we react. Intention is derivative.

If I had to guess your arguments are too much about you.

I don't understand your post. Could you clarify your objection, please?

You dodge by presuming you have power. Starting there never leads to evidence. You admit that by resorting to 'seems extremely improbable' not a permitted evidentiary remark. That you can do anything is proof of nothing. You are observing you doing. So what? If you want to talk about introspection I'm your chickadee. Not evidence. Evidence requires independent observations note the (s).
 
fromderinside said:
You dodge by presuming you have power. Starting there never leads to evidence. You admit that by resorting to 'seems extremely improbable' not a permitted evidentiary remark. That you can do anything is proof of nothing. You are observing you doing. So what? If you want to talk about introspection I'm your chickadee. Not evidence. Evidence requires independent observations note the (s).
First, I do not dodge.
Second, I do not presume to have power. On the basis of the information available to me, it is beyond a reasonable doubt that I have the power to move the mouse on my desk. I was using that as an analogy, since I did not expect anyone would actually suggest I do not even have that power.
Third, I do not "admit" anything, and "seems extremely improbable" is indeed a rational probabilistic assessment.
 
ruby sparks said:
As to acquittal.....

Imagine the court case taking place in and among a society made up of and run by sophisticated thinking/reasoning robots that don't have free will but are nonetheless capable of learning and making decisions. The robots who decided what to do with the serial raping robot would probably not acquit it, for a variety of reasons. For instance, the faulty (raping) robot, if freed, could easily rape again. And/or, since the society is made up of thinking robots with learning capacities, jailing raping robots would be a causal deterrent to other potentially-raping robots.
I'm having difficulty imagining the scenario. The robots think and reason, and can learn and make decisions. It seems to me they can act of their own accord, in other words of their own free will. If I were to assume otherwise, I would be implying that for some mysterious reason, they (and similarly we) cannot act of their own accord. But I would find it difficult to understand what makes it the case that our/their actions are not of their/our own accord, as they obviously appear to be.

Essentially, the scenario seems similar to 'imagine that we humans cannot act of our own accord, even though appearances are as they are in the real world', and I have trouble imagining how that might happen.

Still, I think I get your point: imprisoning people would be justified on grounds of danger, or deterrence. The problem is that said grounds are, on their own, unjust. I don't rule out that, when deciding how to allocate limited resources, the people in the government who make such choices should consider, in addition to the main goal of punishing the guilty as they deserve, secondary goals such as preventing dangerous people from causing more harm to those who do not deserve it, or deterring other bad people from engaging in similar acts, as long as the guilty are not punished more than they deserve. Yet, in this scenario, those secondary goals would be primary, and the system would be allowing (not when it malfunctions because the judge or jurors make a mistake, but as a matter of design) the punishment of people without conclusive evidence that they did something for which they deserve to be so punished.

Consider this scenario: In our human society, the defense attorney has not brought up the matter of determinism to the consideration of the jurors, and they are not thinking about that. However, the pieces of evidence the prosecutor presented are not bad, but not strong enough to establish beyond a reasonable doubt that the defendant is the author of the rapes and murders attributed to him. In fact, rationally, the jurors reckon that it is very probable (say, between 0.8 and 0.9, to give it a number) that he is the perpetrator of all of them, but there is a non-negligible chance that someone else did it, and he did not rape or kill anyone. In other words, he is very probably guilty, but not with a probability high enough to put it beyond a reasonable doubt.

Yet, the considerations that you mention remain: why not convict, with a 0.8-0.9 chance that he did it? Surely, that would work as deterrence. And - on the basis of the available information - the probability of more people getting similarly raped and murdered goes down if he is imprisoned or executed. Why not do it, then? I would say it's because it would be unjust to do it, because there is a reasonable chance that he might not deserve the punishment. But then, the same applies to the robots (or humans, since there seems to be no difference) scenario.
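
For what it's worth, the arithmetic behind that intuition can be made explicit with a toy calculation (the 0.85 figure and the batch of a thousand cases are purely illustrative assumptions drawn from the 0.8-0.9 range above, not data from anywhere):

[CODE]
# Illustrative back-of-the-envelope arithmetic for the 'convict at 0.8-0.9?' question.
# The credence and the number of cases are assumptions for the sake of the example.
p_guilty = 0.85   # a juror's rational credence that this particular defendant did it
cases = 1000      # imagine a thousand trials decided on evidence of this strength

rightly_convicted = p_guilty * cases
wrongly_convicted = (1 - p_guilty) * cases

print(f"Convicting all {cases} such defendants:")
print(f"  ~{rightly_convicted:.0f} guilty people punished")
print(f"  ~{wrongly_convicted:.0f} innocent people punished")
[/CODE]

On those assumed numbers, roughly 150 innocent people would be punished per thousand such convictions, which is the sort of outcome the 'beyond a reasonable doubt' standard is meant to rule out, and it is the same worry being raised here about convicting purely on grounds of danger or deterrence.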
 
ruby sparks said:
As to acquittal.....

Imagine the court case taking place in and among a society made up of and run by sophisticated thinking/reasoning robots that don't have free will but are nonetheless capable of learning and making decisions. The robots who decided what to do with the serial raping robot would probably not acquit it, for a variety of reasons. For instance, the faulty (raping) robot, if freed, could easily rape again. And/or, since the society is made up of thinking robots with learning capacities, jailing raping robots would be a causal deterrent to other potentially-raping robots.
I'm having difficulty imagining the scenario. The robots think and reason, and can learn and make decisions. It seems to me they can act of their own accord, in other words of their own free will.

I suggest that you can imagine what she's imagining if you ask yourself whether robots have free will now. Do neural nets, which learn and get smarter, have free will?

If your reaction is something like, "Well, no, they don't have free will; they're just robots. And the neural nets may act like they're thinking, but we know what's actually happening, so we know they really aren't," then you probably get it.

Ruby may believe that future robots--robots capable of passing the Turing test--will be the result of well-understood incremental improvements over tech that we have now. Tech that does not have free will.

Ruby doesn't foresee any magic moment in the development of robotics that will remind us of this:

[image: importSoul.png]

And, absent such a moment, robot brains will grow in complexity, but the differences will always be of degree rather than of kind.

If ever we learn to make robots that think like humans, it will be because we've figured out how humans think--and it will have turned out that we are just a kind of robot ourselves.

Note to Ruby: I apologize if I have misrepresented you in any way.

...

Still, I think I get your point: imprisoning people would be justified on grounds of danger, or deterrence. The problem is that said grounds are, on their own, unjust.

The four reasons for punishment are rehabilitation, isolation, deterrence, and vengeance. If you rule those out as unjust, you leave no justification for punishment at all.

I don't rule out that, when deciding how to allocate limited resources, the people in the government who make such choices should consider, in addition to the main goal of punishing the guilty as they deserve,

What's that, poetic justice? That's not a main goal. I don't think it's any goal at all. What would be the point?

If punishment isn't justified as an attempt at rehabilitation, isolation, or deterrence, then it isn't justified at all.

secondary goals such as preventing dangerous people from causing more harm to those who do not deserve it, or deterring other bad people from engaging in similar acts, as long as the guilty are not punished more than they deserve. Yet, in this scenario, those secondary goals would be primary, and the system would be allowing (not when it malfunctions because the judge or jurors make a mistake, but as a matter of design) the punishment of people without conclusive evidence that they did something for which they deserve to be so punished.

Can you offer an example of a punishment that is deserved for a reason other than rehabilitation, isolation, or deterrence?
 