• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

The Great Contradiction

Please yourself, of course; but you misstated my argument.

(* I said "To reject the retributive notion of moral desert is simultaneously to reject the notion of not hurting the innocent because they don't deserve it." Your attempted paraphrase, "rejecting retributive punishment", is not synonymous with "rejecting the retributive notion of moral desert".)
Yes, that was quite a blunder. Apologies for the misrepresentation.
You're mocking me, aren't you?[/buzzlightyear]

Yes, it was quite a blunder. Take four suspects:

Adam is innocent and punishing him won't be an effective deterrent.
Bonnie is innocent but punishing her will be an effective deterrent.
Cindy is guilty but punishing her won't be an effective deterrent.
Dennis is guilty and punishing him will be an effective deterrent.

"Rejecting retributive punishment" means not punishing Cindy.

"Rejecting the retributive notion of moral desert" means treating Cindy like Adam and treating Bonnie like Dennis.

When you decide to punish Dennis only, you're rejecting retributive punishment, but you're not rejecting the retributive notion of moral desert: you're using the retributive notion of moral desert, in order to distinguish between Dennis and Bonnie.

I wrote "To reject the retributive notion of moral desert is simultaneously to reject the notion of not hurting the innocent because they don't deserve it." because treating Cindy like Adam and Bonnie like Dennis fails at the task of not hurting the innocent because they don't deserve it.

When you misstated my argument as "rejecting retributive punishment necessarily means one rejects the notion of not hurting the innocent because they don't deserve it", you implied I'm claiming that not punishing Cindy fails at the task of not hurting the innocent because they don't deserve it. I'm not claiming that. It isn't treating Cindy like Adam that fails at the task; it's treating Bonnie like Dennis that fails at it.

Since you evidently propose to punish Dennis only, you evidently have a punishment algorithm that works like the Roman Republic: decisions are made by two consuls who have to agree before an action may be taken. When the consuls disagree, "No" takes precedence over "Yes". And you're in effect labeling this system of government, "Consul Deterius is in charge, Consul Retribulus has no authority".
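The two-consul rule described above can be sketched, purely as an illustration (the names and flags here are mine, not anything proposed in the thread), as a small decision procedure: punishment goes ahead only when both the desert criterion and the deterrence criterion vote "yes".

```python
# Hypothetical sketch of the "two consuls" punishment rule:
# Consul Retribulus votes yes only for the guilty; Consul Deterius votes yes
# only when punishment would deter. A "no" from either consul blocks action.

suspects = {
    "Adam":   {"guilty": False, "deters": False},
    "Bonnie": {"guilty": False, "deters": True},
    "Cindy":  {"guilty": True,  "deters": False},
    "Dennis": {"guilty": True,  "deters": True},
}

def punish(suspect):
    retribulus_says_yes = suspect["guilty"]  # the retributive notion of moral desert
    deterius_says_yes = suspect["deters"]    # the deterrence criterion
    return retribulus_says_yes and deterius_says_yes  # both consuls must agree

punished = [name for name, s in suspects.items() if punish(s)]
print(punished)  # only Dennis satisfies both consuls
```

Note that the only thing distinguishing Bonnie from Dennis in this sketch is the `guilty` flag, i.e., the desert test: the retributive notion is still doing work even though Cindy goes unpunished.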
 
Sorry, life's too short.
 
I am just getting around to replying now....

Now, if by 'a prior determination' you mean a previous cause, in the usual sense of the words.....

Yes, I think that is how I mean it.

... then it is of course false. Some of the causes of my deciding to write this - of my own free will - are my thinking about the matter and my desire to defend the correct position. Those, however, are not constraints, in any relevant sense of the word 'constraint'. Specifically, those causes do not restrict the freedom of my choice, and are instead part of the process by which I make said free choice.

What I would say is that they are part of the process of you making the choice. It's not free. I can't see any way that it actually is, given that it is apparently governed by previous causes at every instant. So what I am saying is not at all obviously false to me.
 
If by that you mean all of the causes, of course I am not. It would not be possible (too many, since the Big Bang)......

You don't need to identify all the individual determinants to accept (if you do) that they add up to 'full determinism'. That's all you would need to accept. We're temporarily discounting randomness, of course.

... and in any event, not relevant.

Not sure why you would say that. Seems totally relevant to me.


I did not try to think anything at all by then. Rather, I freely chose to read his post, and 'cola' was part of a sentence, but I read the whole sentence ' If I said cola you'd probably say Coke or Pepsi.', and after that, of course I had by then thought of Coke and Pepsi, because they were words in the sentence I had just read. Then I decided to look up a cola, in order to give a reply to his post that was not what he predicted. It wasn't the first that came to mind, either.

So, what happened was nothing. I read the sentence too fast to start thinking of brands mid-sentence.

However, I think I can give an answer of the sort you are looking for if I move from 'cola' to 'car', which he also said. The first name I thought of was 'Tesla', probably because I had read an article about Tesla (now that I think of it, it was a post mentioning an article). Now, I did not choose to think 'Tesla' of my own free will. I chose of my own free will to read his sentence. One of the many effects was that I thought 'Tesla'. That was not a free choice on my part. Nor was it a coerced choice. Rather, it was not a choice at all. It just happened. Then, I chose - of my own free will - to look for other names, because I intended to reply as I did: namely, with a name that I had never heard before, in order to address his point that I would say some brand I was conditioned to say.

So, when I decided to look for a name I had never heard before, I was thinking for a second 'what can I look for?', and one of the things that for some reason popped into my head was 'look up chinese electric car makers'. Why? I do not know. That part was not my choice. Then I did decide it would work (i.e., it would give me what I wanted, a name of a car that I hadn't heard before), so I looked that up. The first one on the Wikipedia list that I had not heard before was Dongfeng. Since it did what I wanted I decided of my own free will to post that name.

In short, our mental life involves both free choices and things that are not free choices but just happen. Those do not make our choices less free, or unfree. That's all over the place.

When I choose (freely) to, say, try to solve a difficult (for me at least) math problem, I deliberately choose, for example, to think about the matter, to dedicate time to it, etc., but I expect that my unconscious thought processing will yield the results of the computations into my conscious mind - and it does. Again, our thought processes involve free choices and unconscious processing all the time, intertwined. But this is not a problem for freedom.

Whoa. :)

I only asked you to consider what happened when you read the word 'cola' (or 'car' if you prefer, I don't think it makes any difference what the word is).

As far as I can see, you correctly identified that you did not choose to think 'Tesla'. You say it 'just happened' rather than it being a free choice, but that it was not a coerced choice.

Actually, I would disagree on that. First, it did not 'just happen'. It was not random. Second, I would say that the prior causation (your brain seeing the word 'car') was functionally equivalent to a coercion, because it is essentially a force, and in this case one that restricted you (for a host of reasons that pertain to your brain, and in that instant) to thinking 'tesla'. In other words it was a causal determinant, for you, in that instant (it would not have been for someone unaware of what a tesla is).

We could quibble about the word 'coercion' there but I think it would be missing the point, given that the function and effect is the same, namely that 'car', for you, automatically produced 'tesla'.

So, I am going to skip past that (what I see as a) pointless word quibble and just call the move from 'car' to 'tesla' the 'domino effect'.

Now, so far, so good, but what you did after that, it seems to me, despite me asking you to pause at that point, was jump (or slide) straight to an unwarranted assumption that at some point thereafter, something different happened, that at some point, your brain managed to escape the process of falling dominoes, of which 'car' to 'tesla' is just the first one.

Please explain how it's not all falling dominoes.
 
Let me quote Thomas McCormack in The Fiction Editor:

Just roll past the here-undefined word "cluster." Defining that wouldn't help us with our discussion.

--begin McCormack quote--

The cluster twinkles like a Disney Christmas with
familiar, genial-seeming terms. But in fact it's a
source of confusion, frustration, misconception,
and miscarriage.

The confusion is betrayed initially by the cluster's
disordered vocabulary. Ever since Aristotle first
groped into its occulting gloom, commentators
have failed to agree on a consistent lexicon:
Philosophers, teachers, critics, and writers of
how-to-write books have reinlessly used words
like 'plot', 'story', 'structure', 'situation', 'theme',
'premise', 'proposition', 'crisis', 'catharsis', 'resolution'--
and on through an attic of jumbled and overlapping
terminology.

Underlying this verbal pandemonium is, predictably,
conceptual chaos: The words arise from ideas that are
blurred and rimless.

This comes perilously close to saying that something
essential to discuss is essentially undiscussable, but
I have to bear-dance into it anyway because it's a
crucial area that editors often mention, rarely think
through, and never adequately understand, and
my assignment is to make the case that something
can be done about this.

--end McCormack quote--

McCormack is talking about literary terminology, but his point is true in many other areas. It's amazing that we can communicate at all.

Thought-provoking. Thanks for posting.

As far as I can see, retribution -- as distinct from rehabilitation, isolation, and deterrence -- is mere cruelty, wearing a self-righteous mask.

What did you think of what I said to The AntiChris on this, here:

If, for example, what is being said here is that the sense of justice having been done is, of itself, useful (psychologically for the wronged person and by extension for the society with which that person interacts) then that might be neither villainous nor chilling. It might be flawed, if for example it is based on an incorrect assessment of whether the wrongdoer had free will or not, and there might potentially be a better approach if beliefs in free will weakened or disappeared, but in the meantime it still might function pragmatically in a useful or good, albeit imperfect way.
 
However, my 'conscious now' did not happen in the past.

All the events that are in (are the content of) your conscious now already happened in the past, it would seem; that is the point.

And I don't just mean events in the world outside your head, I mean anything, including the internal products of your own brain, though it is, I think, initially at least, easier to think of the former, the obvious things, what I might call the 'inputs', or perhaps 'external causes' (since these inputs seem to be causal to what happens next).

I gave the example of seeing a face. It could be a pin prick instead. Or even reading the word 'car'.

Point being, any action or reaction you consciously experience or report deciding on has already happened.

Note that consciously intending in advance to do something that hasn’t happened yet (eg ‘I will drive to the shops later’) is just a mental plan of hypothetical action, something that remains a possibility, not an actual enacting decision itself.

Nor is the having of that conscious ‘forward intention’ freely willed anyway. It will be causally determined, just like everything else, it seems.

Now, none of the above matters if free will can be exercised non-consciously. That’s something that hasn’t been unpacked yet.
 
ruby sparks said:
It seems to me you are assuming the word 'identical' and that it needs to be included. Otherwise, your P2 does not work, as I see it.
I do not understand why it looks like that to you, given that I had just explained (in the very post you were replying to) that I was not making that assumption at all. If you say P2 does not work without the assumption, then fine, we have a disagreement: I say it is true, you say it is false. No problem, we may discuss a disagreement. But since P2 is explaining what I mean by the words I am using, if you intend to discuss what I'm actually saying, you may not change it for something else. No, P2 does not have that assumption at all. It is simply not what I am saying.

ruby sparks said:
To try to demonstrate, consider the following as regards P2: if you provide the same inputs to two machines, but the two machines are only very similar but not identical, the outputs will not be the same.
That is false. For example, you have a computer, and I have a computer. They are similar, but surely not identical. I am typing on my keyboard, and making text documents with a program named xed. Suppose you install xed (first install Linux if you are running another OS). Suppose then that you type "The cat is on the mat", and then save the document. Suppose I also type "The cat is on the mat" and save the document. Suppose then that we open the documents, and copy and paste and then post it here. What happens is that as long as both your computer (including software and hardware) and my computer are functioning properly, the outcome is the same.

But now suppose they are not running the same program. I keep using xed, and you use whatever it is you use, which very probably is not xed. You can still produce a text document with the same content. That is the outcome that matters. You do not need an exact particle configuration. What you need is that both systems say 'morally wrong' when given the same description of a behavior, or 'X is more immoral than Y', and so on.

As my example in the previous post shows, this is common with humans. I do not know why the bear example did not persuade you. But let me try again:
Suppose that Alice and Bob enter a cave, take a look, and see there is not more than one exit. Then they leave the cave, and they see a bear get in the cave. As they continue to look at the cave's entrance, they see another bear get in. A while later, they see one bear get out. They do not see any other bears. Now, Alice and Bob are not identical humans. But at this point, either Alice and Bob both believe there is exactly one bear in the cave (maybe a dead one, but one), or else something in Alice's brain or in Bob's brain is malfunctioning. It would not be the moral sense, but rather, whatever mechanism counts objects, or whatever makes intuitive probabilistic assessments, or some other system. But something needs to be malfunctioning.

In other words, the systems of two different humans who witness the bears as described will yield the same verdict: there is exactly one bear in the cave. Any other verdict would involve a malfunctioning of at least one of the brains.
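As an illustration of this agreement-despite-different-internals point (the sketch is mine, not part of the original argument), two differently built "counters" still yield the same verdict when both function properly:

```python
# Two differently implemented "bear counters" (a hypothetical sketch):
# different internals, same verdict on the same observations.

observations = ["in", "in", "out"]  # two bears enter the cave, one leaves

def alice_count(events):
    # Alice keeps a running tally as events occur.
    total = 0
    for e in events:
        total += 1 if e == "in" else -1
    return total

def bob_count(events):
    # Bob counts all entries and all exits, then subtracts.
    return events.count("in") - events.count("out")

print(alice_count(observations), bob_count(observations))  # both say 1
```

Any disagreement between the two would indicate a malfunction in one of the counters, not a mere difference of "taste" about how many bears are in the cave - which is the sense in which P2 is meant.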

ruby sparks said:
'Properly functioning' does not seem to cover it. Two very similar but slightly different machines (which I am suggesting is what human brains are) could both be properly functioning. In short, 'properly functioning', I'm suggesting, has wiggle room. Identical does not.
No, it does cover it. Two very similar machines give the same output in many conditions, barring malfunctioning. What happens is that, as Alice and Bob are not identical, their brains will not yield the same verdicts about everything. For example, maybe Bob finds Mary sexually attractive, but Alice does not find Mary sexually attractive. That of course does not need to imply any malfunctioning. On the other hand, they yield the same verdict in the bear-counting case, and in many, many others - save for malfunctioning. P2 is the hypothesis that this phenomenon, which holds for many, many subsystems - in humans, in computers, and in many other systems - also holds for the moral sense.

There is still some extra wiggle room at the edges, so to speak, because I'm the one stipulating the conditions and you asked me what I meant, and what I mean is P2 with some of that wiggle room, which is meant to account for the vagueness of terms resulting from very slight differences. But those exceptions would be minimal (there are people who would not make that room, and would assert P2 without any exceptions for vagueness, but you asked me).



ruby sparks said:
Unless you are assuming the conclusion at the outset, that there is a universal morality. In that case, yes, only one of the machines would be functioning 'properly' (in line with the supposed universal morality). But we don't know if it's the case, and that's a problem. It could just be that the two machines differ. End of. Personally, I suspect that this is the case, partly because I think morality is a mental construct that does not exist in the natural world. And if I were to temporarily use your approach back at you, I might ask you to demonstrate to me that that is false.
Well, I reckon that there is universal human morality. But let me point out that nearly everyone believes that, not as an assumption any more than they 'assume' that other people also have minds. It's part of the normal human experience and normal human beliefs. You would need some argument to challenge it and knock it out, not to establish it - people are justified in believing others have minds too, without having to have an argument for it; the same goes for universal human morality.

I would say it is not my burden to demonstrate that that is false. It would be yours to demonstrate your claims. For example, you claim that morality does not exist in the natural world. Well, I do not know how you distinguish between the natural world and the rest of the world. But it is your claim and you should establish it. At any rate, experiments with monkeys and apes of other species show that they too make judgements of fairness, rule breaking, desert, they punish, and the like. So, this is not something humans invented out of thin air - but again, this would not be up to me to show, as it is the ordinary, default human position.

Still, I will give it a try if you want me to. Suppose that there is no universal moral sense. So, different people just make different moral assessments because they have different systems. Then what we have is either a system that works in a culture-relative manner, or a system that works at an individual level. What do I mean by this? I will present the scenarios below, with additional premises to give it at least some universality. Let us compare three options.

Scenario 1. Universal human moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Scenario 2. Culture-relative moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.1: If two humans have properly functioning moral senses and the two humans are members of the same group, they (i.e., the moral senses) will yield the same outputs given the same inputs. The groups are instinctively formed depending on social interaction.
P2.1': If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs within some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.


Scenario 3. Individual taste moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.2: If two humans have properly functioning moral senses, they (i.e., the moral senses) might not yield the same outputs given the same inputs, even if the two humans belong to the same culture, with the exception of some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Scenario 4. All other alternatives.
(As before, P2 should be understood as allowing some room for vagueness - but not much; still, to make it manageable, we should leave that for later.)

The reason for the exception in P2.1' and P2.2 is that it holds pretty much everywhere, even when individual verdicts often differ on other matters. For example, suppose Alice reckons tomatoes are tasty, but Bob reckons they taste bad. That may well happen without any malfunctioning in the system by which Alice or Bob assess gustatory taste. On the other hand, suppose in an experiment, they are given something harmless but which is artificially flavored to taste (to humans) like rotten meat, rotten eggs and horse droppings, combined. :) In this case, either there is agreement that it tastes horribly bad, or else either Alice or Bob (or both) has a malfunctioning system (I'm talking about sincere assessments of taste). So, human variation results in different tastes between humans - but only to a point. Of course, one could remove that hypothesis too, but if I'm going to argue against it later, I'm not going to make it harder for myself by including conditions that rule out the most obvious forms of debunking.

Before I go on, I would like to ask you whether you think that Scenario 4 is so improbable that we can rule it out and focus on 1., 2., and 3. I know you already said P1 and P3 are okay, so that's good. I also know you reject P2. But I would like to know if what you had in mind is captured by the disjunction of Scenarios 2 and 3. If not, is there any other alternative you think is worth considering, so I include it as well?
 
We do not perceive in real time.

First, there is the finite speed of light as a fundamental limit. Second, the brain's biological processing takes time, just like an electronic computer's. Thoughts, as biological processes, take time.

The present is a general description of being quasi-real-time. It is a duration of time that moves in time, a moving window.
 
ruby sparks said:
Hm. I'm not sure if it's inconsistent to deploy two tools for the same job:)
It is not always inconsistent. But it is in some instances, including this one. In other words, you have inconsistently claimed that humans cannot act of their own free will. It's inconsistent because in some cases, you claimed it was an empirical finding, and insisted that neuroscience undermined free will. In other cases, you instead said or implied that a contradiction follows from the combination of the premise that the universe is deterministic (what you call "fully" determined) and the premise that I am writing these posts of my own free will.

That is an inconsistent way of arguing against our ability to act of our own free will, unless you are suggesting that neuroscience provides evidence supporting the hypothesis that the universe is deterministic.

ruby sparks said:
Let me, before I stop procrastinating here in this enjoyable discussion and get on with paid work, ask you a question. Go back to a time before Darwin. How could the question of whether or not we evolved from apes have been answered with logic, definitions and semantics?
Answer: it would not have been. That is an empirical question.
On the other hand, mathematical questions were resolved without any knowledge of evolution. And philosophers could point out that there were contradictions in the writings of other philosophers.

ruby sparks said:
And just to note: I retracted my claim that there was a contradiction. I am now (a) having trouble working out how to construct an argument in logic to back up my alternative claim that you are mislabelling (calling the tip of an iceberg an iceberg) and (b) not even yet convinced I am obliged to do that.
Well, you have a burden (an epistemic obligation if you like) to produce the argument as long as you claim there is a contradiction. As you say you retracted that claim, then you don't have one.


ruby sparks said:
It seems to me an empirical fact that you are (at least it's an analogy for what I think you are doing) and yes I am trying my best to use your meaning of words when I say that.
The meaning is not mine. Rather, it should be the ordinary meaning in English of the expression 'of one's own free will', and similar ones. If it is an empirical matter, then I would say the evidence is on your end, because it is part of the ordinary human experience. For example, suppose in a court case, the defendant - who seems to have normally functioning limbs, fingers, etc. - says he put a wallet on a table; he does not have the burden to show that he has the strength to put a wallet on the table. The burden would be on anyone challenging him. And the same would happen if the defendant claimed he went for a walk of his own free will.

At any rate, I actually did provide arguments in support of my claim here, though this is akin to asking me to show that I have the capability to move the mouse on my desk: in both cases, you are the one challenging ordinary experiences; the burden is not on my end.

I know you already replied to that, and then I replied again and addressed your points. But even as the exchange continued, so far you haven't produced any evidence against the view that I'm writing of my own free will (you talk about neuroscience, but I do not see any good reason to think that the results you mention are a problem for my ability to act of my own free will; note that I do not question the empirical findings, but rather, the anti-free will interpretation of them).
 
We do not perceive in real time.

First, there is the finite speed of light as a fundamental limit. Second, the brain's biological processing takes time, just like an electronic computer's. Thoughts, as biological processes, take time.

The present is a general description of being quasi-real-time. It is a duration of time that moves in time, a moving window.

Regardless, that is not relevant to any of my points.
 
ruby sparks said:
Actually, I would disagree on that. First, it did not 'just happen'. It was not random. Second, I would say that the prior causation (your brain seeing the word 'car') was functionally equivalent to a coercion, because it is essentially a force, and in this case one that restricted you (for a host of reasons that pertain to your brain, and in that instant) to thinking 'tesla'. In other words it was a causal determinant, for you, in that instant (it would not have been for someone unaware of what a tesla is).
By 'just happened' I do not mean it is uncaused, but that it is not something I choose, but something I find myself doing.
That aside, it is not a coercion, as it is not forcing me against my will, but it is something that is a part of the way my thinking process normally operates and does not go against my will. You would have a better case - though for compulsion, rather than coercion - if instead you considered something like a melody or song that keeps popping into my head even if I try to get rid of it. That would be a tiny compulsion, and to that extent, it would slightly reduce my freedom - not because it pops into my head, but because it does so against my will -, but of course, it would not prevent me from acting freely most of the time. In this case, however, I had decided to read his post, and relevant things popping into my head to reply are part of what I expect and actually intend to happen, as I do when trying to solve any problem. I just don't decide each.


ruby sparks said:
We could quibble about the word 'coercion' there but I think it would be missing the point, given that the function and effect is the same, namely that 'car', for you, automatically produced 'tesla'.
Well, after considering the matter, it was more likely not just 'car' but 'car company'. But that's the way my thought processes normally work. I expect (and want) things to happen like that in order to think and figure things out. Look at the math problem example. These things that pop into my head are part of my thinking, and what I chose to do. If I choose to go to the park, I will not consciously choose every muscle or every part of my body I move. My arms will swing on their own, accompanying the movement of my legs, much of which would also be unconscious (say, while I'm thinking about math problems), but that does not mean I did not choose to go to the park.



ruby sparks said:
Now, so far, so good, but what you did after that, it seems to me, despite me asking you to pause at that point, was jump (or slide) straight to an unwarranted assumption that at some point thereafter, something different happened, that at some point, your brain managed to escape the process of falling dominoes, of which 'car' to 'tesla' is just the first one.
No, that is not what happened - there was no unwarranted assumption. You got that half-right. Yes, at some point thereafter (or rather, all intertwined), something different happened. But the different thing that happened was not, of course, an escape from the process of causation. What happened is that I made choices. And I made them of my own accord. Actually I chose before that. One of my choices was to read his post. I expected things to pop into my head - relevant to the post - and I intended that to happen - as it is part of my regular thought process and allows me to respond as intended - but I did not choose every individual thing that pops into my head. This is analogous to when I choose to walk to the park, but I do not choose every individual movement of my arms, legs or eyes. When I am reading a post, thinking how to reply, etc., I make some choices, and some other things are not choices, though they are part of the thinking process. That is the difference, not whether they are caused or not (all of them are).

ruby sparks said:
Please explain how it's not all falling dominoes.
If by "falling dominoes" you mean causes and effects, it is.
If by "falling dominoes" you mean that there is no choice involved, then that is the difference. Dominoes have no minds, and make no choices.
If by "falling dominoes" you mean that there is no free choice involved, then that too is the difference. Dominoes have no minds and make no choices; in particular, they make no free choices.

The difference between the things that pop into my head and the things I decide is not whether they are caused by previous events, but rather that the latter are indeed choices. And the difference between free choices and unfree choices is that the latter involve external coercion or internal compulsion.
 
ruby sparks said:
All the events that are in (are the content of) your conscious now, already happened in the past, it would seem, that is the point.
No, that is false. They have causes in the past, but they did not happen in the past. When I made a conscious choice, the choice had causes in the past. My conscious choice itself was not in the past.


ruby sparks said:
Point being, any action or reaction you consciously experience or report deciding on has already happened.
It is happening as I decide it. It has causes in the past. It can be predicted to a greater or lesser extent if those causes can be found.

ruby sparks said:
Note that consciously intending in advance to do something that hasn’t happened yet (eg ‘I will drive to the shops later’) is just a mental plan of hypothetical action, something that remains a possibility, not an actual enacting decision itself.
Nope, it is a decision to do it later.


ruby sparks said:
Nor is the having of that conscious ‘forward intention’ freely willed anyway. It will be causally determined, just like everything else, it seems.
The first sentence seems obviously false, for the reasons I've been giving. The second is true (we assume; we actually don't know whether it is causally determined, of course, but that is not the point).

ruby sparks said:
Now, none of the above matters if free will can be exercised non-consciously. That’s something that hasn’t been unpacked yet.
What matters is whether we can act of our own free will/of our own accord. I would need more information about what you mean by "free will can be exercised non-consciously" before I can continue (a link would help), but in any case, none of the above matters, in the sense that (full) causation does not prevent freedom.
 
We do not perceive in real time.

First, there is the finite speed of light as a fundamental limit. Second, the brain's biological processing takes time, just like an electronic computer's. Thoughts, as biological processes, take time.

'The present' is a general description of quasi-real time. It is a duration of time that itself moves in time, a moving window.

Regardless, that is not relevant to any of my points.


Nevertheless, he has made a compelling argument that time happens only in the past, and perception happens only in the future.
 
fromderinside said:
I thought I made it clear that morality and ethics come from two distinct and different systems. One is moral on some scale. If morality is a sense, there are tugs and pushes on behavior. It comes with being a human. Ethics is a human-created system of rules. One applies ethics on some scale consciously. I'm pretty sure being and doing are different things. You'd need one argument P for morals and another argument Q for ethics.
You are using the words in an unusual manner. In the usual English sense, the terms 'immoral behavior', 'morally wrong behavior', and 'unethical behavior' all mean the same thing, so I use the same argument.

Not really.

From: https://www.diffen.com/difference/Ethics_vs_Morals

Ethics and morals relate to “right” and “wrong” conduct. While they are sometimes used interchangeably, they are different: ethics refer to rules provided by an external source, e.g., codes of conduct in workplaces or principles in religions. Morals refer to an individual's own principles regarding right and wrong.
 
ruby sparks said:
It seems to me that what some of these ideas are assuming and what, for example, Angra is saying, is that there is a hypothetical 'standard human' (sort of like there may be a standard carpet vacuuming robot in a way). I wonder if the idea of 'proper functioning human' comes from this.

First, we do have a concept of a human being, as we have concepts of, say, dogs, cats, flashlights, desks, and a lot of other things. We use those concepts to distinguish between different things. A human is not the same as a desk; a fox is not an aspirin, and so on. Is that a 'standard human'? Well, I'm not sure what you mean, but there is human and non-human (though of course the transition is fuzzy, as our concepts have a tiny degree of vagueness; but that is generally not a problem for assessing whether the objects around us are human beings or not).

Second, just as we can tell a human apart from a cat, we can (in many cases) tell whether a cat is ill or healthy, whether its organs are functioning properly or not. The same goes for a human. So, the concept of proper functioning is a regular concept we have and surely grasp intuitively, just as you understand the difference between a healthy lung and a sick lung (due to cancer, or a virus, or whatever). Similarly, there is such a thing as mental illness, when the mind is malfunctioning, and also lesser sorts of malfunctioning usually not classified as illnesses. I already gave other examples in other posts (e.g., the bear counting in the cave with one exit).
 
Not really.

From: https://www.diffen.com/difference/Ethics_vs_Morals

Ethics and morals relate to “right” and “wrong” conduct. While they are sometimes used interchangeably, they are different: ethics refer to rules provided by an external source, e.g., codes of conduct in workplaces or principles in religions. Morals refer to an individual's own principles regarding right and wrong.

No, you are using a distinction some people make, but I'm using the most common usage by far: 'unethical' means the same as 'immoral' and so on.

But no matter; this is not the point, as I am of course not talking about codes of conduct in workplaces or religions, nor about an individual's own moral beliefs (well, sometimes indirectly, when it comes to motivation). I'm talking about a sense by which humans assess what is immoral, morally obligatory, etc. So, if you or other people use different terminology, that does not affect my points.
 
Wiploc said:
You think that retribution, in the total absence of any other benefit, is still in itself a benefit.
As long as it is just retribution, yes. Else, no. I will construct some simple scenarios below.


Wiploc said:
I can't give you that. If I take away the quotation marks, then I'm granting you your point by making the words meaningless.
Words have meaning, and adding quotation marks does not cancel their meaning.


Wiploc said:
But, back to our own discussion, I don't see how harm can be fitting or deserved if it doesn't accomplish anything. And I don't see how harm (retribution) without side effects (like rehabilitation) can be fitting or deserved.
Well, then, let me try an example. Imagine that two people, Bob and Jack, are marooned on a deserted island. There is no hope for them to return to civilization, and they both know it (say, it happened in the year 500 and they are in the middle of nowhere, in a place no one goes to, where they arrived by accident in a freak storm; or they are from our time but were taken by aliens to another planet and abandoned there; or whatever). Jack is a serial killer.

Scenario 1: Jack takes Bob by surprise. He hits him in the head, and when Bob is trying to get up, Jack stabs him repeatedly, and cuts him in many places. He laughs as Bob dies in a pool of his own blood. Jack lives the rest of his days on the island, alone. But he likes being alone - he hates people - and he enjoys recalling how he murdered his victims, the last of whom was Bob.

Scenario 2: Like Scenario 1 until Bob is dying in a pool of blood. But Jack did not know that Bob also had a knife - Bob just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of recovering. But Jack is very close, so Bob makes an effort and manages to stab Jack once before losing consciousness, never to recover. Now Jack is fatally wounded too, and a few minutes later, he dies as well.

In both scenarios, bad things happen. More precisely, Jack evilly kills Bob. But which world is less bad? (The rest of the world is not affected in any morally relevant way; some particles are in different places, but nothing more.) I would say the world of Scenario 2 is less bad than the world of Scenario 1. Jack committed murder for fun in both cases, but in Scenario 1 he got to enjoy it for the rest of his life, whereas in Scenario 2 he did not; instead, he got punished as he deserved.

So, there is no difference in rehabilitation, deterrence, or anything. What makes the world of Scenario 2 better? That Jack suffers and dies as he deserves. That is why just retribution is a net positive.

Now you might say that at least Bob got to feel like he had done justice, whereas in Scenario 1 he did not have that. If you think that that makes Scenario 2 better (i.e., less bad), then no problem. Here's scenario 3:

Scenario 3: Like Scenario 1 until Bob is dying in a pool of blood. But Jack did not know that Bob also had a knife - Bob just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of recovering. But Jack is very close, so Bob makes an effort and manages to stab Jack once before losing consciousness, never to recover. Now Jack is wounded. However, while Bob thought he had fatally wounded Jack, in fact it is only a flesh wound, and not that serious. Jack recovers, and lives out the rest of his life on the island, alone. But he likes being alone - he hates people - and he enjoys recalling how he murdered his victims, the last of whom was Bob.

And again, Scenario 2 is less bad than Scenario 3. The difference? Justice.


Wiploc said:
It's pure harm, with no benefit. It is unadulterated badness.
It is just harm, and it is a benefit in and of itself. It is unadulterated goodness. Example: see the two scenarios above.


Wiploc said:
I think retribution was probably sometimes on-balance good back before we thought about what it meant. But when I read that the four reasons for punishment are rehabilitation, isolation, deterrence, and vengeance, that made sense to me.
But as I said before, take a look at people demanding justice for their loved ones murdered by a killer, or at rape victims demanding justice against the perpetrator. They generally want just retribution. They may well also want deterrence, or isolation. But that is not their main motive; just look at their actions. It is just retribution. And most humans would seek it even if no other goal could be attained.


Wiploc said:
When we dissect punishment, vengeance is the road-rage part, the malice, the desire to hurt out of anger and self-righteousness. It is the bad part.
When we look at just retribution, that is a part of morality. It is a good thing. Scenario 2 above is clearly the less bad of the three.

Wiploc said:
You think "justice" is an ordinary concept that everyone should understand. I think it is controversial, blurred and rimless, and that it has been disputed by experts for millennia.
Do you believe the same is the case for concepts like 'morally wrong', or 'morally obligatory', etc.?
If not, what is the difference? Both are ordinary concepts. Just Google "justice for my daughter", "justice for my son", or similar expressions.

Wiploc said:
And I don't see any way -- once we set aside rehabilitation, isolation, and deterrence -- that what is left can ever be fitting, deserved, or just.
What do you think of the scenarios I constructed above? You can easily construct a gazillion like those if you so choose.
 
And the difference between free choices and unfree choices is that the latter involve external coercion or internal compulsion.

Well then, I'm afraid your definition is, imo, a complete fudge, akin to calling the tip of an iceberg the iceberg, or calling the natural universe god, or saying that the sun goes around the earth. It (your definition) may be the nub of all our disagreements about free will, and it possibly makes it pointless for us to explore in detail, so apologies for not dealing with all your recent posts.

You have a watertight, logically and semantically correct definition of free will, and so what you are calling free will can exist.

It's just that it does not seem to actually describe what is going on, and as such is either inaccurate, a mislabelling, merely colloquial, folk-psychological, and/or effectively meaningless.

Thank you for discussing free will with me. It was enjoyable and civilised, and we've given it a good go without coming to blows, but I think we may have to agree to disagree.

We can still try to discuss morality instead. :)
 
What happens is that, as Alice and Bob are not identical, their brains will not yield the same verdicts about everything. For example, maybe Bob finds Mary sexually attractive, but Alice does not. That of course need not imply any malfunctioning. On the other hand, they yield the same verdict in the bear-counting case, and in many, many others, save for malfunctioning. P2 is the hypothesis that this phenomenon, which holds for many, many subsystems (in humans, in computers, and in many other systems), also holds for the moral sense.

I hope to respond to the rest of your post later. For now, the above caught my attention.

Why do you not suggest that there is a universal for 'sexually attractive'?

I think you may have hit the nail on the head there. The number of bears is an objective, empirically measurable fact about the world outside your head. Sexual attraction and morality are merely differing mental judgements.

Unless you are assuming a universality for one but not the other, which would seem to be (a) assuming conclusions and (b) being inconsistent when doing so.
 