Is Ockham's Razor Bullshit?

I think there is little doubt that Occam's Razor (or some related concept such as parsimony) is both a very beneficial and widely used heuristic in everyday life. Whether the same is as true for science, I don't know. I'm not a scientist myself. My understanding is that many scientists either don't use it or are unaware of it, but that could be because they just aren't clued in. My daughter, who has a master's in biology, had never heard of it, or indeed of parsimony (which was a bit of a disappointing surprise, even from a use-of-the-English-language point of view). Like many science students, and indeed working scientists, as far as I am aware, her exposure to the philosophy of science was minimal.

I think Politesse is mainly talking about the latter (science).

And of course the role of parsimony in science (I reckon it's gotta have at least some role) may differ between sciences.

But in some (emphasis some) ways, I wonder if the formality and rigour of science was invented/adopted precisely to get away, at least as much as possible, or where it helped, from our everyday intuitive (evolved) application of things like Occam's Razor.
 
Almost every scientist uses it regularly. As I said in a prior post, it is applied in the most basic inferential statistics that are used in all sciences that collect quantitative measures. Plus, most scientists apply it in their peer reviews, rejecting papers that violate the principle. And even during initial theory formation, scientists start with what is known and try to add as little as possible to account for any anomalies not already explained. It is true that many scientists don't even realize that they are applying the principle and misconstrue what the principle is. Yet, it is so foundational to scientific reasoning that they do it implicitly w/o realizing it. For example, its centrality to significance testing and mathematical modeling is rarely explicitly discussed.

I'd go as far as to say that it isn't possible to practice actual science without it, and that any discipline where the principle is rejected by a large % in the field is not only not a science but not a field of rational inquiry.
 
Even though that's rather a bold claim (or two bold claims to be exact), I think I can agree with it (or both) in principle, because it strikes me as very, very unlikely that Occam's Razor/parsimony play no role in (or can be eliminated from) science, on the basis that science is done by humans, a species whose cognition has evolved to heavily rely on it at a very fundamental level (see: the brain as a prediction machine). In other words it's akin to how we are 'programmed' (by which I mean to allow for both nature and nurture/learning).

So I definitely wouldn't rule it out. I might be unsure as to how much to rule it in (vis-à-vis the extent of its role in science, I mean), because my remark that I see science as trying, in at least some ways, to get away from relying on intuitive heuristics such as Occam's Razor still rings, er, intuitively true to me.

To be honest, I'm too out of my depth (I don't myself do science) to be able to either challenge or concur with what you've been saying. In other words, I'm not intuitively convinced but I accept that might be the case because I don't know enough.

-----------------------------------------------------

So, if you accept that I'm just broadly curious, and not making a strong claim of my own, when you say that almost every scientist uses it regularly, could you outline a typical or common example? I think it would need to be pretty simple, if possible. An example from quantum physics is likely to go over my head. But I'll leave it up to you. So long as it isn't too complicated. I am not familiar with curve-fitting but in principle I should be able to grasp a simple example because my general maths (including statistics and probability) is reasonably good. That said, confidence intervals (which I think I grasp in principle), along with things like p-values, confuse me somewhat when I see their application in the results section of many scientific papers.
 
On another tack, here's an example of something else, where parsimony later proved to be amiss:

(Note that this does not imply that parsimony isn't nevertheless a useful probabilistic principle; a single counter-example would not show that, by any means.)

In the mid-to-late 1800s, there were, I read (see link below), two theories that explained the fact that closely-related species, and/or a significant percentage of the same species, could be found on geographically very separate land masses on the planet.

Darwin's explanation was dispersal (which, as I understand it, involved the possibility of individual members of one species drifting, in one way or another, over large distances; although improbable, this was considered capable of reaching sufficient probability given enough time, of which there was plenty, given the slow pace of most evolutionary processes).

The alternative explanation for the same phenomena (or at least some of them) involved an additional factor: the idea that the two relevant geographical land masses might have been joined together at one time, so that some of the same or similar species once lived together but were later separated by very long distances. The latter component came from suggestions by early geologists, of whom Darwin was apparently scathing:

"If there be a lower region for the punishment of geologists, I believe ...you will go there. Why, your disciples in a slow and creeping manner beat all the old Catastrophists who ever lived".

By the 1950s, the underlying idea of continents (rather than, or as well as, organisms) drifting over large distances (until then considered very controversial, and apparently as intuitively objectionable to Darwin as the idea that there had ever been a global flood event or similar worldwide catastrophe) was accepted as part of the explanation for the distribution of some of the species in question.

Occam’s Razor in science: a case study from biogeography
https://www.researchgate.net/public...graphy/link/56b36f6408ae156bc5fb283d/download

(Full pdf can be freely downloaded).

In other words, if most evolutionary biologists up to the 1950s were using a dispersal model to explain certain phenomena, then parsimony had, in hindsight, led them astray for about a hundred years.
 
I agree that science tries to get away from mere "intuitive" heuristics, but the principle need not be applied so informally. It can be, and is, applied mathematically. Mathematical modeling is basically instantiating theories as equations. To produce a non-linear relationship you need a more complex equation with more parameters, parameters that represent changes in the strength and/or direction of the relationship over values of X. For example, a U-shaped function assumes not only nonlinear growth but that at some point the relationship levels off and then reverses direction. The quadratic function y = ax² + bx + c requires more parameters/assumptions to derive a Y value from an observed X than the linear function y = bx + c. Any respected journal (and almost all scientists in a field) would reject a paper that merely shows the quadratic equation accounts for significant variance in Y given X. They would require that it also predict significantly more variance in Y, given a set of observed X values, than a simpler linear equation (or a simpler type of non-linear one).

And how much more variance you need to explain depends upon how many more parameters/assumptions the model includes than the alternatives. That's because each parameter reduces the degrees of freedom used to test whether the increase in variance explained is beyond chance levels. The more added parameters, the lower the degrees of freedom and thus the greater the increase in explained variance required to be beyond chance levels.
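If it helps to see that logic run, here's a minimal, self-contained sketch (all data made up: a truly linear relationship plus noise, so the quadratic's extra parameter shouldn't earn its keep). It fits both models by ordinary least squares and computes the partial F statistic, which is the "extra variance explained per lost degree of freedom" comparison described above:

```python
import random

# Made-up data: a truly linear relationship (y = 2x + 1) plus noise.
random.seed(1)
xs = [x / 10 for x in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]

def fit_polynomial(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (Gaussian elimination on a small matrix)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = b[row] - sum(A[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = s / A[row][row]
    return coeffs  # coeffs[i] multiplies x**i

def rss(xs, ys, coeffs):
    """Residual sum of squares: variance NOT explained by the model."""
    return sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
               for x, y in zip(xs, ys))

linear = fit_polynomial(xs, ys, 1)      # 2 parameters
quadratic = fit_polynomial(xs, ys, 2)   # 3 parameters

rss1, rss2 = rss(xs, ys, linear), rss(xs, ys, quadratic)
n, p1, p2 = len(xs), 2, 3
# Partial F statistic: the quadratic's extra parameter must buy enough
# extra explained variance per lost degree of freedom, or it is rejected.
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
print(f"linear RSS = {rss1:.3f}, quadratic RSS = {rss2:.3f}, F = {F:.3f}")
```

On data like this the quadratic's RSS is only trivially lower than the linear model's, so F stays small and the simpler model is retained; with genuinely curved data, the drop in RSS (and hence F) would be large enough to justify the extra parameter.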

That latter point about chance relates to the p-values you mentioned, and is also a manifestation of the parsimony principle. The p-value is the probability that you would get the observed results (e.g., that size of group difference, strength of correlation, or increase in variance explained) or stronger just by random sampling error, even if the null hypothesis were true (the null being that there is no real difference/relationship). The p-value is computed from the observed data, using the observed variance and distribution of individual scores on each variable to estimate how often random samples of the observed size would yield those results. Since we know that all samples of observations contain sampling error (random deviations from the total population of all possible observations), sampling error is the simplest and most plausible explanation for any observed pattern, unless it can be shown by these inferential statistics that the probability of sampling error producing those results is below some threshold (typically 5%, the .05 of fame). Though you need to lower that threshold for each additional hypothesis you test on the same data, since 1 in 20 purely chance effects will fall below a 5% threshold.
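That definition of a p-value can be illustrated by brute force (all numbers here are hypothetical): simulate many experiments in which the null is true by construction, then ask how often sampling error alone produces a difference at least as large as an observed one.

```python
import random

random.seed(2)

def null_experiment(n=30):
    """Two groups drawn from the SAME distribution (the null is true),
    so any observed mean difference is pure sampling error."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return sum(b) / n - sum(a) / n

# Build the sampling distribution of the difference under the null.
diffs = [null_experiment() for _ in range(4000)]

def p_value(observed, null_diffs):
    """Two-sided empirical p-value: how often does sampling error alone
    produce a difference at least this large?"""
    extreme = sum(1 for d in null_diffs if abs(d) >= abs(observed))
    return extreme / len(null_diffs)

# A small observed difference is easily produced by chance...
print(p_value(0.1, diffs))
# ...while a large one almost never is.
print(p_value(0.8, diffs))
```

The count of null experiments beating the observed difference is the empirical p-value: small effects are produced by chance all the time, so parsimony says to prefer the chance explanation for them, while an effect chance can rarely produce earns the extra assumption of a real relationship.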

Note that all alternative methods that don't use p-values and significance tests (e.g., confidence intervals around effect sizes) still apply the same underlying logic, where sampling error is taken into account and the data must show effects larger than sampling error alone is likely to produce.
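A similar sketch for the confidence-interval version of the same logic (again with made-up data, and the usual 1.96 normal approximation for a 95% interval):

```python
import random
import statistics

random.seed(3)
n = 40
# Made-up sample with a real effect of +1.0 and unit noise.
sample = [random.gauss(1.0, 1.0) for _ in range(n)]

mean = statistics.fmean(sample)
se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
# Approximate 95% confidence interval (normal critical value 1.96).
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(ci)
# The effect is treated as real only if the interval excludes 0 --
# i.e., only if the observed effect is larger than sampling error
# alone is likely to produce. Same parsimony logic, no p-value.
```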
 
I don't see that we're making progress here.

Can you offer an example of how Sara should empirically validate whether Joe stole her money?

She should construct her model based on known facts. They don't necessarily have to be about Joe specifically.

Then how would anyone be able to conclude that Joe did or did not steal her money if at least one fact was not specifically about Joe?
 
A couple of things about that biogeography example. First, it is of course true that the simpler theory is not always the correct one. That isn't the principle. The principle doesn't assume what is true in a given instance, but what is more likely to be true more of the time across all instances. It's about what to assign a higher probability to when you don't have enough information to rule either out or conclude one must be the case.

That's not a flaw of the principle, since there is no such thing as a basis for a rational conclusion that doesn't sometimes lead to the wrong conclusion. Which is why certainty is impossible in rational thought, and new data must always be incorporated to ensure that such errors are temporary and the more complex theory becomes favored when new data shows its added complexity is needed. Even direct "empirical validation" is sometimes wrong in the conclusions it supports, as are the standard statistical approaches I described in my prior post. Most of the time, an observed relationship where the data cannot rule out chance turns out to have been real and not due to chance. But that will be discovered if you make enough observations. Whereas you will never discover your error in accepting fake relationships as real if you don't evaluate the data against the parsimonious chance explanation. IOW, parsimony makes errors that are self-correcting over time within science, whereas ignoring the principle of parsimony leads to permanent errors w/o a correcting mechanism.

In addition, it is debatable whether dispersal over large distances was ever more parsimonious than continental drift. Sure, we knew creatures could move, but the assumption that they could move such distances is another matter, and arguably as unfounded at the time as speculation about continental drift. In fact, I suspect geologists had supporting data that Darwin was simply ignorant of, such as similar geography and soil in the same locations as the similar organisms. That would have made the drift account more powerful in the amount of data it could explain and predict. Thus, if its extra assumptions are necessary to explain more data, then it doesn't actually violate the principle of not including more assumptions than needed.
 
Sure, applying it to a rigorously contested debate would likely be misplaced, but I don't think that's what it's commonly used for. It's not meant to replace serious analysis for significant problems; it's just a thinking tool that can be used if/when you find it appropriate.

So if, for example, I hear a loud noise in my house at 2 am I can likely use Occam's Razor to assume my cat just did something, rather than that someone broke in and I'm about to be murdered. But if you also wanted to completely ignore it as a tool that would be valid too. And not taking it seriously as a tool in certain types of debates would definitely be valid as well.

Nobody ever said it's supposed to stand up to some kind of objective standard, all of the time.

Then you have clearly never debated the existence of God in a public forum. Not only is that specific debate the most common application of Ockham's Razor in the real world, I seldom hear it applied to anything else quite frankly. Observe most of its uses in this very thread. I kind of give props to Jokodo for coming up with something novel for once.

To dig back in the thread a little, this comment surprised me, Poli. Perhaps it is merely an exaggeration for effect. I have strong memories of the pleasure I had when I first read Kuhn's The Structure of Scientific Revolutions, back about fifty years ago. He made the point that some of the major revolutions in science, such as the Copernican revolution in astronomy or Dalton's creation of the atomic hypothesis in chemistry, not to mention Darwin, succeeded not because they were necessarily more accurate than the predecessors they displaced, but because they were simpler and more direct. The Copernican system, for example, couldn't make truly accurate predictions until modified by Kepler's observations confirming that planetary orbits were ellipses. Yet the Copernican system won converts from the start because of its elegant simplicity.

And didn't Feynman write somewhere about elegant simplicity being a criterion for judging mathematical models, or am I confusing him with someone else?

I really hadn't heard the argument applied to God's existence until I came to IIDB, except of course for Laplace's famous "I have no need for that hypothesis."
 
Feynman can find the universe elegant and beautiful all he likes, I do not object to finding aesthetic beauty in science. But such beauty is not the basis of science. If you are saying Copernicus' model of the solar system was correct because it was elegant, and that he used Ockham's Razor to decide which idea was more elegant-ish, I fundamentally disagree with you. Not just aesthetic but, beyond any shadow of a doubt, political and religious tensions strongly influenced the popularity of Copernicus' work and its circulation among the esoteric set in Reformation-era Europe, but that did not make it any more or less correct as a description of the Solar System. Indeed, most scientists were not as enamored of heliocentrism until nearly the 18th century, after many of the necessary corrections had been made, making for a much less elegant systematic description. We have a model of the solar system now that I would not call especially elegant. The system is quite lopsided, those nasty planets making NO effort to coordinate or generate perfect geometric shapes or even distances, always moving and shifting and messing around with each other's orbits, birthing and swallowing other celestial bodies, running into things. I find the system beautiful in its way, but I would not call our current, more accurate model more simple or direct than what Copernicus proposed. His model wins on both fronts. It just doesn't describe reality. Which, in my opinion, is much more important.
 
Poli's problem is that he keeps trying to find a small niche somewhere to fit in a God hypothesis, and he is tired of being told by us arrogant know-it-all atheists that Occam's Razor has cut them all away. :p

Given competing arguments, Occam's razor never worked to guarantee the truest outcome, only which one was most likely to be truest. It is possible that what appears to be an unnecessary step in an argument is actually necessary for some reason that has not yet been discovered. Monotheists are more likely to appreciate Occam than polytheists, but atheists appreciate him even more.
 
An interesting hypothesis on your part. Where did I say any such thing? It's true that atheists (in my experience) overuse the Occam's razor network of ideas as a vague source of authority in ontological debates. I find this dubious because I find the principle itself dubious, and a bit underhanded if the person then walks it back to "just a rule of thumb not to be taken too seriously" once its legitimacy as a scientific principle has been challenged.

I don't personally have a position on the God issue, as we have discussed many times in the past. I am a practicing Christian, but with strong syncretic influences and not one who greatly values belief for its own sake, and I am not offended by (for instance) metaphorical readings of theism that don't violate a materialist position. Anyone who knows me personally could easily attest to the fact that I am far more offended by lazy thinking than I have ever been by a theological disagreement. Different religions just make the world more fun and interesting. But lazy thinking makes every philosophy a danger to life and limb.

Aren't most atheists renegade monotheists, at least on the down-low? The God they imagine (and reject) generally looks a lot like the Christian one in their descriptions thereof. Who admittedly looks suspiciously like Zeus, but that's another discussion.
 
That's probably true in the US, but is an inevitable consequence of the fact that, until recently, atheists were a negligible part of the population, while monotheists were the overwhelming majority.

In Europe, where third and even fourth generation atheists are now commonplace, most atheists view both monotheism and polytheism equally dimly.

Personally, I am not a renegade anything - I come from a line of atheists that pre-dates WWII (my grandmother was infamous in the East End of London for shaming the local vicar into lifting rubble rather than praying after air raids, at a time when almost nobody ever dared to question the behaviour of the clergy).

It wasn't until I was in my teens that I realised that there are some people who actually take Christianity seriously. It shocked me, as I had assumed that it was just another thing the grownups use as a story for little kids, like Santa Claus. That adults actually take the whole thing seriously still strikes me as bizarre. And I suspect that is true of anyone who wasn't ever told to believe by parents or grandparents.
 
I suppose what I mean is that most atheists I have ever met, at least, have an enormous amount of investment in discrediting monotheism specifically; they have studied Christian history, they can quote huge portions of the Protestant Bible, they know all the popular medieval European theologians and can recite rebuttals to all their central claims. Whereas their rejection of other gods (or other quasi- or non-theistic perspectives) is a lot more vague. Perhaps more heartfelt, but not a major concern. We all belong to a certain historical moment and context.

If you disagree that's okay, I think that's wandering a bit far from the main topic of the thread.
 
In our many past discussions, you have expressed animosity towards the attitudes of atheists, and you have tended to position yourself as an extremely nuanced theist. I suspect that you find the principle dubious here more because you find yourself surrounded by atheists. I doubt that you could operate sanely without relying on it for most of your daily needs in the reasoning department. It is our principal means of dealing with uncertainty. We simply can't know what is true in an absolute sense, so it is important to be able to act on what we consider likely to be true.

Yes, I know that you take a very nuanced stand on the God issue, but the fact is that you do take a position on the subject. Otherwise, you wouldn't bother calling yourself a "practicing Christian" at all. FTR, I would never call you a lazy thinker. If anything, I would say that you tend to overthink your positions on the God issue. Atheists can afford to be lazier thinkers, since they don't have as much work to do.

Even Richard Dawkins admits to being a cultural Anglican, so there is nothing surprising about the monotheistic bias of modern atheists. They simply focus on the version of theism that they usually have to deal with. However, as bilby has pointed out, it is not uncommon to come across atheists who were never theists during childhood. I was raised as an Episcopalian, so I had to earn my disbelief the hard way--by conscious rejection of belief. As a consequence, I tend to avoid the lazy argument that theists have the burden of proof in religious arguments, although I still believe that they do. I just think that it isn't worth the trouble to try to convince a theist that God is an unnecessary hypothesis, unless that theist is willing to entertain the possibility.
 
Simplest explanation - the universe just exists.
(Keeps all the super-intelligent atheists/materialists/determinists happy)

Unnecessary additional complication - why does the universe exist?
(A question asked by all of us foolish simpletons)

I would argue that Ockham requires us to answer the existential why questions, for the very reason that the atheistic hypothesis, which leaves questions unanswered, is actually less elegant - more confusing and complicated - than the neat, tidy, God-delusion conclusion.

To Ockham's Razor, the unanswered why question is like an annoying loose thread which must be attended to.

 
I suppose what I mean is that most atheists I have ever met, at least, have an enormous amount of investment in discrediting monotheism specifically; they have studied Christian history, they can quote huge portions of the Protestant Bible, they know all the popular medieval European theologians and can recite rebuttals to all their central claims. Whereas their rejection of other gods (or other quasi- or non-theistic perspectives) is a lot more vague. Perhaps more heartfelt, but not a major concern. We all belong to a certain historical moment and context.

If you disagree that's okay, I think that's wandering a bit far from the main topic of the thread.

I have an investment in discrediting whatever unevidenced nonsense people are trying to inflict on my life.

By coincidence of birthplace, that means various flavours of Christianity.

If people try to make laws that limit my freedom because of their Buddhist or Sikh beliefs, then I shall oppose them in the same way - but right now, the impact of those religions on my life is negligible. If the same were true of Christianity, I wouldn't waste my time on that nonsense, either.
 
Simplest explanation - the universe just exists.
(Keeps all the super-intelligent atheists/materialists/determinists happy)

Unnecessary additional complication - why does the universe exist?
(A question asked by all of us foolish simpletons)

I would argue that Ockham requires us to answer the existential why questions, for the very reason that the atheistic hypothesis, which leaves questions unanswered, is actually less elegant - more confusing and complicated - than the neat, tidy, God-delusion conclusion.

To Ockham's Razor, the unanswered why question is like an annoying loose thread which must be attended to.


If answering "God" to all the atheistic "we don't yet know"s really left no unanswered questions, then how do you explain the fact that we no longer live like medieval peasants?

God isn't an answer; it's the rhetorical equivalent of "shut up". As a response to any question, it's at best valueless, and more often pretty damn rude.
 
In our many past discussions, you have expressed animosity towards the attitudes of atheists, and you have tended to position yourself as an extremely nuanced theist.
I have animosity towards certain views of certain atheists, and they are similar in basic character and reasoning to the attitudes I dislike when outpouring from some people from all or many other religious and philosophical traditions. I don't like hypocrisy, I don't like bigotry, and I don't like lazy thinking. That doesn't describe all atheists any more than it describes all theists.

But I do hold atheists to a higher standard, I think. If you're going to stake your claims in the spiritual world expressly and entirely on your superior logic and rationality, I think that logic should be all the more subject to question. I can't really argue with someone about whether they spoke to God last Tuesday; that claim is not easily falsifiable. Even if I hooked the person with such a vision up to an fMRI and somehow proved to my own satisfaction that they are schizophrenic, it would be silly and pointless to expect them to agree, as they almost certainly never agreed that science was a reasonable basis on which to challenge their claim in the first place, nor to accept the results of such a test as valid. But if someone tells me "a scientific mind will always reach the same conclusions as me", that begs to be deconstructed. If they are what they say they are, and value what they say they value, they've no business continuing to make such a claim if it can be shown to have no basis. I'm a scientist at heart as well as in vocation, and I care about appeals to her authority, especially in what to me are dubious circumstances.

It is our principal means of dealing with uncertainty. We simply can't know what is true in an absolute sense, so it is important to be able to act on what we consider likely to be true.
An interesting point. I can grasp the emotional solace that could be derived from such a philosophy. But just because a belief brings consolation to your worried brow doesn't oblige me to also believe or use the same principle. Even if truly universal, as you seem to be claiming, not all cognitive biases are necessarily trustworthy guides to the unknown simply by dint of existing, and filling functional and psychological needs. Ethnocentrism, for instance, exists in all humans to some degree as far as we know, yet is a terrible guide to predicting the most probable behavior of others.

Yes, I know that you take a very nuanced stand on the God issue, but the fact is that you do take a position on the subject. Otherwise, you wouldn't bother calling yourself a "practicing Christian" at all.
Christianity is not, to me, a statement of belief in a particular flavor of God. I say I practice Christianity because I practice Christianity; it is the language in which my spirituality is conducted. Indeed, I reject the very European notion that religion is or should or could be defined as a set of particular philosophical propositions. This isn't and never was the heart of faith, only a game that philosophers and kings built around it. True faith is experiential, not rhetorical.

Atheists can afford to be lazier thinkers, since they don't have as much work to do.
The honesty is appreciated.

I find the work interesting, however, and see no reason to cease pursuing it. Of course, one could step aside before reaching the final chapter, but why would a bibliophile wish to? The well of spiritual experience is both deep and fascinating.
 
In our many past discussions, you have expressed animosity towards the attitudes of atheists, and you have tended to position yourself as an extremely nuanced theist. I suspect that you find the principle dubious here more because you find yourself surrounded by atheists. I doubt that you could operate sanely without relying on it for most of your daily needs in the reasoning department. It is our principal means of dealing with uncertainty. We simply can't know what is true in an absolute sense, so it is important to be able to act on what we consider likely to be true.

An interesting point. I can grasp the emotional solace that could be derived from such a philosophy. But just because a belief brings consolation to your worried brow doesn't oblige me to also believe or use the same principle. Even if truly universal, as you seem to be claiming, not all cognitive biases are necessarily trustworthy guides to the unknown simply by dint of existing, and filling functional and psychological needs. Ethnocentrism, for instance, exists in all humans to some degree as far as we know, yet is a terrible guide to predicting the most probable behavior of others.

The point I was talking about was that Occam's razor is a general principle that I think we all adhere to, because it is our principal method of eliminating an explosion of possible alternative hypotheses. Most of our beliefs are held with only more or less certainty, yet we navigate reality as if some were beyond question. We have to narrow down the range of possible answers, and the only way to do that is to assume that the answer associated with the least amount of cognitive dissonance is correct. We don't think of all possible alternative explanations, because we suppress the tendency to pursue them. Occam's razor is just a succinct distillation of the way reasoning operates. Human cognition is fundamentally associative, so our deductions always follow the path of least resistance through a complex network of associations.

The problem with bringing up Occam's razor to a theist is that they make very different assumptions about the nature of reality than I do. I don't find the existence of a spiritual (non-physical) plane of existence at all plausible. So some kind of "goddidit" explanation produces less cognitive dissonance for theists than it does for me. There are all sorts of hidden assumptions that factor into our chains of reasoning. An "Occam's razor" argument assumes that we are basically working with the same set of assumptions about how physical reality and nature actually work. We aren't, so it is a bit silly to act as if we were. God isn't the only unnecessary hypothesis in a theological debate.
 