NobleSavage
Veteran Member
This is a really good read:
http://www.psmag.com/navigation/hea...entists-save-themselves-human-behavior-78858/
Unlike Sokal’s attack, the current critique of experimental social science is coming mainly from the inside.
From the article: "Simmons and his colleagues duly reported their adjustment in their mock paper—but they left out any mention of all the other factors that they had tried and discarded. This was all, Simmons emphasizes, within the bounds of what is considered fair play in most psychology departments and journals."
BTW, on the issue of fraud, studies have shown that medicine is the field with the most research fraud.
From the article: "Simmons and his colleagues duly reported their adjustment in their mock paper—but they left out any mention of all the other factors that they had tried and discarded. This was all, Simmons emphasizes, within the bounds of what is considered fair play in most psychology departments and journals."
The article makes some good points, but the above part is bullshit. It is not at all considered "fair play" within psychology to run multiple analyses and report only some of them without correcting the criterion p-value used to determine "significance". It is considered fraud, and this is heavily emphasized in the multiple statistics courses that all psychology grad students take. In fact, psych grad students receive more training in this and related statistical issues than scientists in most hard sciences do. Journals routinely require researchers to apply alpha corrections to account for the inflated Type I error rates that come with multiple comparisons.
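To make the Type I inflation concrete, here is a minimal simulation sketch (my own illustration, not from the article). It uses the fact that under a true null hypothesis a well-calibrated p-value is uniformly distributed, so we can simulate "studies" directly without modeling the data; the study counts and test counts are arbitrary choices:

```python
import numpy as np

# Each simulated "study" runs several independent tests on pure-null data.
# Question: how often does at least one test come out "significant"?
rng = np.random.default_rng(0)
n_studies, n_tests, alpha = 100_000, 10, 0.05

# Under the null, p-values are Uniform(0, 1).
pvals = rng.uniform(size=(n_studies, n_tests))

# Report-only-the-winners strategy: the study counts as a false positive
# whenever ANY of its tests clears the uncorrected alpha.
fp_uncorrected = (pvals < alpha).any(axis=1).mean()

# Bonferroni correction: compare each p-value to alpha / n_tests instead.
fp_bonferroni = (pvals < alpha / n_tests).any(axis=1).mean()

print(fp_uncorrected)  # theoretical rate is 1 - 0.95**10, about 0.40
print(fp_bonferroni)   # pulled back near the nominal 0.05
```

The point of the sketch: with ten uncorrected comparisons, the chance of at least one spurious "significant" result is roughly 40%, which is exactly the inflation the alpha correction exists to fix.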
The only reason it is a bigger issue in psychology is that psychology inherently deals with smaller effect sizes. Every human behavior is so complexly determined by countless interacting factors that experimentally manipulating a single factor, with the kind of weak manipulations you can use in a lab in 60 minutes, is likely to produce only small effects, even when the variable really is a causal factor in the behavior and has important real-world impact.