• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Experimental Statistics takes a sucker punch

Rhea

Cyborg with a Tiara
Staff member
Joined
Jan 31, 2001
Messages
14,973
Location
Recluse
Basic Beliefs
Humanist
This article certainly throws a molotov cocktail into experimental statistics.


Top 10 ways to save science from its statistical self
Null hypothesis testing should be banished, estimating effect sizes should be emphasized



Statistics is to science as steroids are to baseball. Addictive poison. But at least baseball has attempted to remedy the problem. Science remains mostly in denial.

True, not all uses of statistics in science are evil, just as steroids are sometimes appropriate medicines. But one particular use of statistics — testing null hypotheses — deserves the same fate with science as Pete Rose got with baseball. Banishment.

The article (which is part two in a series) opines on the following:

10. Ban P values
9. Emphasize estimation
8. Rethink confidence intervals
7. Improve meta-analyses
6. Create a Journal of Statistical Shame
5. Better guidelines for scientists and journal editors
4. Require preregistration of study designs
3. Promote better textbooks
2. Alter the incentive structure
1. Rethink media coverage of science


I haven’t read the whole article yet, but its discussion of the null hypothesis and p-values (and the banning thereof by congressional legislation) piqued my interest, and I thought it would be interesting to discuss.
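The article's central contrast — ban p-values, emphasize estimation — can be illustrated with a small simulation. This is a hedged sketch, not anything from the article itself: two hypothetical groups with a modest true difference, where instead of a bare "significant/not significant" verdict we report the estimated effect size and a confidence interval (normal approximation, numpy only).

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 500)   # hypothetical control group
b = rng.normal(0.2, 1.0, 500)   # hypothetical treatment group, small true effect

# Estimation-focused summary: effect size plus an interval, not just a verdict.
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
ci = (diff - 1.96 * se, diff + 1.96 * se)   # approximate 95% CI for the difference

pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd                 # standardized effect size

print(f"mean difference: {diff:.3f}")
print(f"95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
print(f"Cohen's d: {cohens_d:.3f}")
```

The point of reporting the interval and Cohen's d is that a reader sees how big the effect plausibly is, rather than only whether a null hypothesis was rejected.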
 
I would personally like to see "7. Improve meta-analyses" changed to "abolish meta-analyses". Especially in the soft sciences, a meta-analysis of fifty papers generally reveals which 'hot new theory' is currently receiving funding, not whether that 'hot new theory' has any merit or is just the latest fad in thinking on the subject.

Pick a period when Freudian psychology was the "hot new theory", and a meta-analysis of thirty papers from then would lead one to a very different conclusion than a meta-analysis of thirty papers written today, when psychology research has embraced some other "hot new theory".
 
There is a small paperback that may still be in print, How To Lie With Statistics.

Statistics applied to experiment and observation are a tool, not absolute in any sense.

A statistical conclusion is never absolute. In good analysis there is always a confidence level or interval.

In sampling statistics, confidence grows with sample size; 100% sampling is absolute. A sample of 10 out of 100,000 has a low confidence level, or probability of being correct.
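The point about sample size can be made concrete. This is a minimal sketch with a hypothetical population of 100,000 values: the 95% confidence interval for the mean narrows roughly as 1/sqrt(n), so a sample of 10 gives a wide interval while a sample of 10,000 gives a tight one.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(50.0, 10.0, 100_000)  # hypothetical population of 100,000

def ci_halfwidth(n):
    """Half-width of an approximate 95% CI for the mean, from a sample of size n."""
    sample = rng.choice(population, size=n, replace=False)
    return 1.96 * sample.std(ddof=1) / np.sqrt(n)

for n in (10, 100, 10_000):
    print(f"n = {n:>6}: CI half-width ~ {ci_halfwidth(n):.2f}")
```

With 100% sampling there is no interval at all — the population mean is known exactly, which is the "absolute" case described above.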

Sounds like an anti-science rant from the pseudo-science crowd.
 
People are opportunistic. All people are opportunistic. Where there's a means, there's a method. And other snide things.

Statistics are as good as the user. A reputable user always tests models for potential bias and accounts for it in publication.

Yeah, steve_bank, some irate non-professional tried to use professional tools, got burned, and now complains.

Reputable journals require justification of statistics and analysis as a condition for consideration for publication.

Remember the cold fusion thing back in the day?
 

I remember, it was covered up by the coal and oil industry because it would have replaced fossil fuels....

I have referenced cold fusion as a good example of how science is supposed to work, and usually does. Within around 48 hours it was debunked.
 

No one was able to reproduce the claims, even though several universities duplicated the set-up and procedure trying to. Sadly, Fleischmann and Pons continued on for years trying to demonstrate that it was real. It is a mental trap many people have fallen into: they believe the result they want is there, just buried in the noise. Because they believed, Fleischmann and Pons 'saw' the signal, but those at the universities trying to duplicate the tests saw only noise.
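That trap — finding "signal" in pure noise — is easy to demonstrate. This is a hedged illustration, not a model of the actual cold fusion experiments: run many hypothetical experiments that measure nothing but noise, apply a standard significance test to each, and a handful still come out "significant" purely by chance.

```python
import numpy as np

rng = np.random.default_rng(7)

# 100 experiments measuring pure noise: there is no real signal anywhere.
n_experiments, n_points = 100, 30
false_positives = 0
for _ in range(n_experiments):
    noise = rng.normal(0.0, 1.0, n_points)
    # One-sample t statistic against a true mean of zero.
    t = noise.mean() / (noise.std(ddof=1) / np.sqrt(n_points))
    if abs(t) > 2.045:  # approximate two-sided 5% cutoff for t with 29 df
        false_positives += 1

print(f"'significant' results from pure noise: {false_positives} / {n_experiments}")
```

Around 5 of the 100 noise-only experiments will clear the 5% threshold, which is exactly why a believer who keeps testing will eventually "see" the result they want.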

I understand that there are others trying to find a way to create a cold fusion process. It is an appealing dream. I read a while ago that Google was even funding some research.
 