
Herrnstein and Murray on "Intelligence Besieged"

Well, God bless you for the work you do.
Thanks. You don't need to sympathize with it or agree with the theory, but I would appreciate the assumption of good will. Too many people think they can read my mind, and they see nothing but evil. I know my own mind and it is merely MOSTLY evil.

You give yourself too much credit.
 
Togo, if you think plausibility matters and prior probability does not, then please explain the difference.
 
Togo, if you think plausibility matters and prior probability does not, then please explain the difference.

Here are links to the definitions.

http://dictionary.reference.com/browse/plausibility

http://dictionary.reference.com/browse/prior+probability

Knock yourself out.
I have my answer. In the context of epistemology, there is no difference. You can feel free to provide your answer, but I do not suggest you rely on common English dictionaries.
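To make the claim concrete: in Bayes' theorem, the prior P(H) is the formal slot where a judgement of plausibility enters the calculation. A toy sketch, with purely illustrative numbers, shows that the same evidence moves an implausible hypothesis and a plausible one to very different posteriors:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H.

    prior           : P(H), the prior probability (plausibility) of H
    p_e_given_h     : P(E | H), probability of the evidence if H is true
    p_e_given_not_h : P(E | ~H), probability of the evidence if H is false
    """
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Identical evidence, different priors -> different posteriors.
implausible = posterior(0.01, 0.9, 0.1)  # prior 1%  -> posterior ~8.3%
plausible = posterior(0.50, 0.9, 0.1)    # prior 50% -> posterior 90%
print(implausible, plausible)
```

The point of the sketch is only that "plausibility" and "prior probability" occupy the same place in the arithmetic: whatever you call it, the number P(H) is doing the work.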
 
Togo, if you think plausibility matters and prior probability does not, then please explain the difference.

I'm not sure defining those terms is helpful. The problem is that you are taking a mathematical theory about how the probability of a chain of events is affected by some of those events being resolved, and using it as an excuse to treat your own a priori beliefs as being evidenced, when they are not.

There is no place in science for changing the probabilities of an experiment because you feel the result you're looking for is likely to be true.
 
Probability is why a fire extinguisher is carried on a boat. Plausibility is why most people do not carry one in their car.
 
Togo, if you think plausibility matters and prior probability does not, then please explain the difference.

I'm not sure defining those terms is helpful. The problem is that you are taking a mathematical theory about how the probability of a chain of events is affected by some of those events being resolved, and using it as an excuse to treat your own a priori beliefs as being evidenced, when they are not.

There is no place in science for changing the probabilities of an experiment because you feel the result you're looking for is likely to be true.
OK, I would like to know where we have common ground, and I would like to be sure that the disagreement is not due to mere misunderstanding or differing definitions of words. Instead of "prior probability," let's say "background knowledge," meaning the vast body of data related to the claim but not relevant enough to qualify as the direct evidence of the claim. Do you take this "background knowledge" to be a relevant part of making judgments of probability of a claim?
 
Making judgements? I guess so, if making gut-judgements about things is what you're trying to do. You can base intuition on all sorts of things. It's not a part of calculating probability, though, nor does it have anything to do with Bayes' Theorem. And it certainly isn't part of scientific evidence.

Earlier on you were talking about 'raising the bar on science they dislike' as being invalid. Is there a difference between that and what you're suggesting?
 
Making judgements? I guess so, if making gut-judgements about things is what you're trying to do. You can base intuition on all sorts of things. It's not a part of calculating probability, though, nor does it have anything to do with Bayes' Theorem. And it certainly isn't part of scientific evidence.

Earlier on you were talking about 'raising the bar on science they dislike' as being invalid. Is there a difference between that and what you're suggesting?
It is related. When a claim generally fits what is expected of the related background knowledge, then it is appropriate to set the bar lower. When an argument from the direct evidence exceeds the lower bar, then it should be considered probable. I generally do NOT recommend setting the bar high just because you don't like the politics. The authors I cited recommend setting the bar high because they don't like the politics. They are surprisingly explicit about the political influence on their scientific thinking. I recommend against that policy not just because it encourages delusions but mainly because science progresses with uncertain claims being presented, examined, put to the test, debated, and so on, until they are either rejected or accepted. If all uncertain claims you don't like are kept off the table, then it stops scientific progress in its tracks.
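"Setting the bar lower" has a direct Bayesian reading: the strength of evidence (likelihood ratio) needed to push a claim past a fixed posterior threshold depends on the prior. A claim that fits background knowledge needs far weaker evidence than a surprising one. A minimal sketch, with illustrative numbers:

```python
def required_likelihood_ratio(prior, target_posterior):
    """Likelihood ratio P(E|H)/P(E|~H) needed for the posterior
    to reach target_posterior, starting from the given prior.
    Derived from Bayes' theorem in odds form:
    posterior_odds = likelihood_ratio * prior_odds."""
    prior_odds = prior / (1 - prior)
    target_odds = target_posterior / (1 - target_posterior)
    return target_odds / prior_odds

# To reach 95% confidence:
print(required_likelihood_ratio(0.5, 0.95))    # ~19: a modest bar
print(required_likelihood_ratio(0.001, 0.95))  # ~18981: a very high bar
```

This is just the quantitative version of "extraordinary claims require extraordinary evidence": the bar is not arbitrary, it falls out of the prior.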
 
Making judgements? I guess so, if making gut-judgements about things is what you're trying to do. You can base intuition on all sorts of things. It's not a part of calculating probability, though, nor is anything to do with Bayes' Theorem. And it certainly isn't part of scientific evidence.

Earlier on you were talking about 'raising the bar on science they dislike' as being invalid. Is there a difference between that and what you're suggesting?
It is related. When a claim generally fits what is expected of the related background knowledge, then it is appropriate to set the bar lower. When an argument from the direct evidence exceeds the lower bar, then it should be considered probable.

Well hang on, you're confusing two different things here.

You're talking about what you consider likely to be the case. That's fine and dandy for political discussions - anyone can have an opinion - but it's not science. It's not really evidence. All you're doing is taking your existing beliefs, and saying that in light of those, some of your other beliefs are more likely to be true.

For example, it's not very likely that Queen Elizabeth II is a lizard-like extra-terrestrial, a theory explained at some length by David Icke. However, if you believe, like Icke does, that lizards have already infiltrated the British royal family and sate their appetite for flesh on commoners and corgis, then it becomes highly plausible that the Queen would be one such lizard. It's implausible that she wouldn't be.

Similarly, we had a Christian historian on the boards a few years back who kept arguing that because the Bible was accurate in so many areas that could be tested (historical detail, descriptions of scenes and places, etc.) and have since been confirmed by hard evidence (mostly modern archaeology), it must be a trustworthy source, and thus is itself evidence for Jesus having risen from the dead, the miracle of the loaves and fishes, and God in general.

What you're arguing is that because there is hard evidence for differences between groups based on skin colour, intelligence must be unitary, genetic, and vary by racial descent. As with God and the lizards above, that represents a failure of logic. Evidence for something consistent with your belief is not evidence for your belief. Not even a little bit.
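The "consistent with" point can be made precise: if a piece of evidence is just as likely whether or not the hypothesis is true, the likelihood ratio is 1 and the posterior equals the prior. A toy sketch with illustrative numbers:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.3
# E is perfectly consistent with H, but would be just as likely
# if H were false -> the evidence is non-diagnostic and the
# posterior is unchanged from the prior.
print(posterior(prior, 0.8, 0.8))  # 0.3: no update at all
```

Evidence only raises a belief's probability to the extent that it is *more* likely under that belief than under its rivals; mere consistency contributes nothing.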

Obviously, within science the criteria for proof are very strict, and you're nowhere near that. But you're not meeting even lesser standards of evidence here. The problem is that there is no direct connection between the conclusion you want to reach and the evidence you're basing it on, and that isn't going to go away.

The authors I cited recommend setting the bar high because they don't like the politics. They are surprisingly explicit about the political influence on their scientific thinking.

It could be that they're applying the policy test, common in medicine but also used in other areas of public policy. Basically, you apply an unusually high standard to any measure which, if implemented as public policy, could cause a great deal of harm if it turned out to be incorrect. This was famously applied to autism studies, where initial findings that autism could be linked to the measles vaccine were ruthlessly suppressed, on the grounds that scaring people away from the measles vaccine would kill a lot of people, mainly children. It's routinely used in developmental psychology, medicine, bomb and fire safety, and so on.

Or it could be that they deserve your criticism. The strength of your position is not affected either way.

I recommend against that policy not just because it encourages delusions but mainly because science progresses with uncertain claims being presented, examined, put to the test, debated, and so on, until they are either rejected or accepted. If all uncertain claims you don't like are kept off the table, then it stops scientific progress in its tracks.

But that's science, not public policy. In public policy you're urging people to adopt measures as if you were right, ahead of any scientific evidence that would demonstrate this. As for science, the easiest way to suppress the free exchange of ideas is to relentlessly push a single conclusion in the absence of evidence, and try to smear your opponents as politically motivated. And the easiest way to encourage it is to keep a firm hand on the evidence and what it can or cannot demonstrate, and make sure that speculation beyond that is labelled as such. Which works fine unless you then try to introduce such speculation into the public policy arena by mistaking it for scientific fact.
 