
Moral Realism

Wiploc
Veteran Member (joined Dec 9, 2002; Denver; basic beliefs: Strong Atheist)
Turns out I'm a moral realist. I'm reading Sam Harris' The Moral Landscape: How Science Can Determine Human Values. I find it compelling.

Any objections to moral realism?
 
Turns out I'm a moral realist. I'm reading Sam Harris' The Moral Landscape: How Science Can Determine Human Values. I find it compelling.

Any objections to moral realism?
It would help if you could explain clearly and unambiguously what you mean by moral realism.
 
Philosophy is kind of my weak area, so here's what Wikipedia has to say:

http://en.wikipedia.org/wiki/Moral_realism (section on objections/criticisms: http://en.wikipedia.org/wiki/Moral_realism#Criticisms )

Criticisms
Several criticisms have been raised against moral realism: The first is that, while realism can explain how to resolve moral conflicts, it does not explain how these conflicts arose in the first place.[16] The Moral Realist would appeal to basic human psychology, arguing that people possess various motivations that combine in complex ways, or else are simply mistaken about what is objectively right.

Others are critical of moral realism because it postulates the existence of a kind of "moral fact" which is nonmaterial and does not appear to be accessible to the scientific method.[17] Moral truths cannot be observed in the same way as material facts (which are objective), so it seems odd to count them in the same category.[18] However, a similar argument could be used to deny that psychology is a science; alternatively, the acceptance of psychology as a cognitive science vitiates this objection (which would not indicate any weakness in the argument, as Feynman made the same claim in "Cargo Cult Science," starting from a different hypothesis). One emotivist counterargument[by whom?] (although emotivism is usually non-cognitivist) alleges that "wrong" actions produce measurable results in the form of negative emotional reactions, either within the individual transgressor, within the person or people most directly affected by the act, or within a (preferably wide) consensus of direct or indirect observers.[citation needed]

Another counterargument comes from moral realism's ethical naturalism.[specify] Particularly, understanding "Morality" as a science addresses many of these issues.[citation needed]

Honestly, the objections sound kind of weak.
 
I object to the idea that there is any way to derive a normative statement from a factual one. Hume covered this, and Sam Harris doesn't really advance the discussion much. Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
 
I object to the idea that there is any way to derive a normative statement from a factual one.

Sure.



Hume covered this, and Sam Harris doesn't really advance the discussion much. Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.

If we want people to be happy ...

That's it. Some people will argue that virtues (honesty, loyalty, etc.) increase happiness. Some will claim that the road to happiness is following orders from an invisible eccentric.

What would be the point of any moral system that didn't tend to increase happiness/well-being? No point at all. With a moral system like that, there would be no reason for people to want to be moral.

Moral claims (though I'm told this isn't a very robust system of morality) tend to fall into two categories. Either you give up some personal happiness in order to achieve a greater increase in group happiness (do not steal), or you give up some current happiness in order to achieve a greater long-term happiness (brush your teeth).

Moral discussion doesn't involve a jump from is to ought. To have a moral discussion, you start at ought. Aircraft design doesn't start by asking, "Should man want to fly?" It already assumes that. The discussion is about how to achieve it. Cooking shows don't open with the question, "Should we want food to taste good and be nutritious?" They already assume that. The shows are about how to achieve it. Moral discussion doesn't start by asking, "Should we desire happiness?" It already assumes that. The discussion is about how to achieve it.

That, increasing happiness, is the realm of moral discussion.

Not everybody agrees with that goal. The exceptions are sociopaths, and we are happy to label them that, and continue with our discussion.
 
What would be the point of any moral system that didn't tend to increase happiness/well-being? <snip>

Not everybody agrees with that goal. The exceptions are sociopaths, and we are happy to label them that, and continue with our discussion.

Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?
 
Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
Moral intuitions are founded upon a conditional?

They are founded upon value-laden statements which only apply on the condition that you have the same desires/goals/etc. When you say "you should not hurt small children," buried in that imperative are a bunch of assumptions that can be imagined as antecedents of a conditional clause. Just off the top of my head, they could be something like:

"If you want to avoid causing pain unless absolutely necessary, then..."
"If you want the future leaders of society to be generally free of emotional hangups, then..."
"If you want to live in a society where as few people as possible suffer from anxiety or depression, then..."

Implicit in each one of these possible antecedents is the empirical statement that posits a correlation between the two variables. For instance, the first antecedent, coupled with the consequent "you should not hurt small children," can be unpacked to represent the empirical statement:

"Hurting small children is likely to result in unnecessary pain." THAT'S the objective part, THAT'S the part we can do Sam Harris' brainwave experiments about, THAT'S the part that can be proven or disproven based on observations.

But notice that it stems from part of just one potential conditional clause. Also notice that it does not have any imperative weight; it's just a fact about the world. You can take it and say "since I want to avoid causing unnecessary pain, I will not hurt small children," or you could say "since I love causing unnecessary pain, I will start hurting small children more." Neither action can be adjudicated based on an impartial examination of the natural universe. And after all, it's the imperative weight of morality that makes it effective. Otherwise, it would have no ability to change people's behaviors. Without it, you would just have a set of neurological and/or socio-economic data over which to bemusedly pore. The behavior-motivating part of morality has to be normative, and so it can never be a mere fact about the state of reality.
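The structure of that unpacking can be put schematically. Roughly (the notation and letters here are purely illustrative, not anything from Harris or Hume):

```latex
% H = "you hurt small children", P = "unnecessary pain results",
% O(.) = "it ought to be the case that ..."
\[
  \underbrace{(H \rightarrow P)}_{\text{empirical, testable}}
  \;\wedge\;
  \underbrace{O(\neg P)}_{\text{normative, not testable}}
  \;\vdash\;
  O(\neg H)
\]
% From (H -> P) alone, neither O(~H) nor O(H) follows;
% the normative premise must be supplied from outside the facts.
```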
 
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?

I talk about increasing happiness as a kind of shorthand. Sam Harris talks about increasing well-being. Dan Barker talks about flourishing. Reducing suffering is certainly important; it's on the right road. As is increasing happiness.
 
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?

I talk about increasing happiness as a kind of shorthand. Sam Harris talks about increasing well-being. Dan Barker talks about flourishing. Reducing suffering is certainly important; it's on the right road. As is increasing happiness.

To play devil's advocate for a moment: what if I disagree? You may call me a sociopath and ignore me, but that's different from refuting me with evidence, as is normally the case in disagreements about facts. As I said, following different starting principles to their logical conclusions may produce radically different moral precepts. It seems these precepts can only be considered truths if their starting principles are taken for granted.
 
Does moral realism treat taking your own life differently from taking someone else's? If so, then explain in specific detail (laying out all the assumptions) what supports this distinction, and how it doesn't really just come down to a purely subjective feeling that people prefer to have control over their own death rather than having someone else control it.
 
Not to me. I view increasing happiness as secondary to reducing suffering. <snip>

I talk about increasing happiness as a kind of shorthand. <snip>

To play devil's advocate for a moment: what if I disagree?

About what specifically? About wanting people to be happy?



You may call me a sociopath and ignore me, but that's different from refuting me with evidence, as is normally the case in disagreements about facts.

Most of us want people to be happy. You hypothetically don't. That's not a disagreement about facts. The rest of us are discussing morality, how to achieve a higher level of happiness, and you are talking about something else.

Go back to the aircraft design example from earlier in the thread: A bunch of us are trying to design a faster, safer, cheaper, more fuel-efficient airplane, and you say, "Eh, I'd rather travel by train," and wander off to do your own thing. No facts are in dispute. No refutation is needed.



As I said, following different starting principles to their logical conclusions may produce radically different moral precepts.

Agreed. For instance, some will disagree with my assumption that morality is about increasing happiness. But their moral intuitions can be challenged, shown to conflict with each other. (Mine too, doubtless.) When you work it out, get down to bedrock, achieve an internally consistent system, it's going to be about increasing happiness.

Yes, that was an opinion.

Here's an example of working through conflicting intuitions. Suppose Joe thinks morality consists of obeying Jehovah, and I think it consists of increasing happiness. We can ask how Joe would feel if he knew that obeying Jehovah would cause universal misery. And we can ask me how I'd feel if I knew that increasing happiness would violate all of Jehovah's commandments. Presumably Joe would be conflicted. (In fact, he's likely to respond, "But that wouldn't happen...") Whereas I wouldn't be conflicted at all. (Who cares about the commandments of an invisible eccentric if they don't make people happy?)



It seems these precepts can only be considered truths if their starting principles are taken for granted.

You preferring to travel by train doesn't make our lift to drag ratio any less true.
 
Moral intuitions are founded upon a conditional?

They are founded upon value-laden statements <snip>
They're intuitions! They're not founded on statements at all. Do you think a chimp who gets ticked off when he gets a smaller reward for doing a trick than he just saw the chimp next to him get for the same trick is basing that reaction on a statement?!? Or do you think he isn't intuitively feeling it's unfair?

When you say "you should not hurt small children," buried in that imperative are a bunch of assumptions that can be imagined as antecedents of a conditional clause. Just off the top of my head, they could be something like:

"If you want to avoid causing pain unless absolutely necessary, then..."
"If you want the future leaders of society to be generally free of emotional hangups, then..."
"If you want to live in a society where as few people as possible suffer from anxiety or depression, then..."
No they aren't. Buried in that imperative is the opinion that you should not hurt small children whether you want to cause unnecessary pain or not. I'm not offering you advice, fer chrissakes! I'm telling you if you don't conform to my moral judgment you're being a dick.

Implicit in each one of these possible antecedents is the empirical statement that posits a correlation between the two variables. For instance, the first antecedent, coupled with the consequent "you should not hurt small children," can be unpacked to represent the empirical statement:

"Hurting small children is likely to result in unnecessary pain." THAT'S the objective part, THAT'S the part we can do Sam Harris' brainwave experiments about, THAT'S the part that can be proven or disproven based on observations.
But that is SO not what "you should not hurt small children" means. People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.
 
Most of us want people to be happy. You hypothetically don't. That's not a disagreement about facts. <snip>

Go back to the aircraft design example: A bunch of us are trying to design a faster, safer, cheaper, more fuel-efficient airplane, and you say, "Eh, I'd rather travel by train," and wander off to do your own thing. No facts are in dispute. No refutation is needed.

There is a set of factual statements about aerodynamics that are true whether or not you want to design a better airplane. Those are analogous to the set of facts about human flourishing, happiness, and/or suffering that (should) inform our moral precepts. However, if you say "all airplanes should be shaped like x," that statement is only true if you agree that an airplane should be designed to meet a certain goal... which, of course, is uncontroversial, because an airplane is a relatively straightforward concept. In practice, you might as well be stating a fact, since the underlying assumptions are so basic.

But the goal of morality is not uncontroversial. If you say "abortion is wrong," the set of assumptions one has to accept goes beyond a simple dictionary definition of morality. People who disagree about it are not analogous to one person talking about planes and another preferring ground travel. Both are talking about morality. They are simply starting from different foundations, whether consciously or not.



As I said, following different starting principles to their logical conclusions may produce radically different moral precepts.

Agreed. For instance, some will disagree with my assumption that morality is about increasing happiness. But their moral intuitions can be challenged, shown to conflict with each other. (Mine too, doubtless.) When you work it out, get down to bedrock, achieve an internally consistent system, it's going to be about increasing happiness.

Yes, that was an opinion.

Which I happen to strongly disagree with, not hypothetically, but actually as a person. Am I a sociopath? More importantly, can you prove that I'm not talking about morality, but something unrelated like horseback riding?

Here's an example of working through conflicting intuitions. Suppose Joe thinks morality consists of obeying Jehovah, and I think it consists of increasing happiness. <snip>

You're proving my point. The same moral precept, say "never work on a Sunday," can be considered wrong under your moral system and right under his, based on starting intuitions. Therefore, "never work on a Sunday" by itself cannot be considered a fact about the world. Or, to put it another way...

It seems these precepts can only be considered truths if their starting principles are taken for granted.

You preferring to travel by train doesn't make our lift to drag ratio any less true.

:shrug:
 
They're intuitions! They're not founded on statements at all. <snip>

But that is SO not what "you should not hurt small children" means. People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.

In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. I don't disagree that the origins of morality are rooted in survival instincts that have evolved to propagate our genes. I just don't think that's a relevant determinant of what we ought to do. To me, the central defining feature of morality is going beyond our Darwinian predispositions, taking advantage of the ones that serve a shared purpose and subjugating those that are contrary to it. But the important part is that there's a purpose, which requires dialogue, a hashing out of options, etc. (all of which is part of having a society that enables the free exchange of ideas). Morality at its best is indeed advice, particularly when informed by evidence. Again, I don't disagree that there is a strong emotional component to it, or that we evolved to be nice to little kids. I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning and planning; our genes are not.
 
In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. <snip> I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning and planning; our genes are not.

Unfortunately for your argument, the same set of genes (twin one) can find itself in an entirely different set of circumstances (twin two). Each twin will derive a life philosophy, part of which will be a set of rules, soft or hard, for dealing with conditions. To suggest that the twin living under communism will have a similar set of rules to the twin living under libertarianism strains the imagination. How desires get resolved depends on the situation in which one finds oneself: not just for people located in different places experiencing different lifestyles, but also for the same person at different times within a lifestyle.

For instance, where a young person primarily resolves his or her sexual desires in the context of a strong glandular environment, an old person primarily resolves his or her desire to continue living, mostly absent any glandular component. Same person, different moralities.

Sure, conditions in the external world influence one's choices within one's, let's say, motivational context, but they don't determine which set of values one is going to base one's morality upon. That, it seems to me, is driven by internal demands expressed in tendencies and desires suited to one's condition in life. So if we treat morality as an evolving thing, not getting more complete, just getting more appropriate for the here and now, we should wind up with narrow bands of goods or ought-tos suited to age. Trying to treat morality as a group thing strains the mind, resulting in inappropriate suggestions for specific individuals at every age.
 
In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. <snip>

Trying to treat morality as a group thing strains the mind, resulting in inappropriate suggestions for specific individuals at every age.

What makes them inappropriate? The fact that they go against people's bodily urges? Well, that's the point! Doing what is right (however one may define it) quite often boils down to wresting control over your behavior from the "glandular environment."
 
I think Pyramidhead has the critique pretty well covered. There are many factual claims one can make to help explain why one feels the way one does about something. Those claims can be scientifically evaluated, but in the end the moral stance is just how you feel about something, and having a detailed justification for those feelings doesn't make them any more real, objective, or valid.

All actions are causally determined and ultimately the consequence of our "nature" interacting with our environment. So the fact that an act is a product of our nature doesn't at all distinguish moral from immoral acts, and morality cannot be objectively determined by the causes of acts. Every act is also in service of some goal desired by the actor, and those goals are a product of their "nature." So the fact that an act serves a function of goal attainment is likewise true of all moral and immoral acts.

In the end, the only way to separate moral from immoral acts is to choose some subset of goals as "more worthy" than others, and that can never be done on any basis but pure subjective feeling. Nothing outside of human subjective feeling cares whether a person is happy, whether the most people possible are happy, whether anyone is in pain, whether society is stable, whether the human race flourishes, or whether we act according to evolved instincts. We choose a goal over others based on pure emotion, then we decide how best to achieve that goal, and we label acts that thwart that goal "immoral." But if a person has as their goal to engage in that act, and thus the best way for them to achieve their goal is to engage in it, then they have gone through the exact same process in determining the act is moral as we did in determining it is immoral. Thus they cannot be objectively wrong, nor we objectively right. They are merely wrong relative to our foundational preferred goals.
 
They are founded upon value-laden statements <snip>
They're intuitions! They're not founded on statements at all. ... People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.

In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. I don't disagree that the origins of morality are rooted in survival instincts that have evolved to propagate our genes. I just don't think that's a relevant determinant of what we ought to do.
Then we're definitely talking about different things, because I am sure as hell not talking about survival instincts and gene propagation. I'm talking about moral intuitions; what are you talking about? You appear to have an idée fixe that morality has to be goal-oriented; you're keyword searching to find a clue as to what goal I must think morality is the pursuit of; and you picked survival instincts and gene propagation because I used the keyword "chimp". I said "chimp" not as a codeword for Darwinian evolution but because chimps are living breathing proof that you can have moral intuitions without language, which means moral intuitions are not founded on statements and conditionals.

To me, the central defining feature of morality is going beyond our Darwinian predispositions, taking advantage of the ones that serve a shared purpose and subjugating those that are contrary to it.
"Nature, Mr. Allnut, is what we are put in this world to rise above." - The African Queen

The trouble with that viewpoint is that Katharine Hepburn was playing a religious wacko. There isn't any god defining what's above nature; and even if there were, his holding the opinion that X is above nature wouldn't make it so; there's a current thread on that topic. What the heck makes you think when you pick a purpose and find someone to share it with you're "going beyond" anything? Where are you going to get this purpose? Pure Reason? As Hume pointed out, reason is the slave of the passions. Your choice of a purpose to which you'll subjugate some of your Darwinian predispositions is driven by how others of your Darwinian predispositions react to your environment, same as the reactions of that angry chimp.

But the important part is that there's a purpose, which requires dialogue, a hashing out of options, etc.
Why is that the important part? Why does it need a purpose at all? That sounds like a recipe for a "The end justifies the means" morality.

Morality at its best is indeed advice, particularly when informed by evidence.
At its best? At its best by what standard? A moral standard? Is "Don't hurt that kid if you want to avoid unnecessary suffering." morally better than "Don't hurt that kid."?

Again, I don't disagree that there is a strong emotional component to it, or that we evolved to be nice to little kids. I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning, planning; our genes are not.
I don't disagree; the question isn't whether to stop at that point, but where to go from there. Talking ourselves into believing categorical imperatives are hide-and-seek-playing hypothetical imperatives is a dead end.
 