
The softer the science, the more positive results it gets

lpetrich

PLOS ONE: “Positive” Results Increase Down the Hierarchy of the Sciences by Daniele Fanelli

Abstract:
The hypothesis of a Hierarchy of the Sciences with physical sciences at the top, social sciences at the bottom, and biological sciences in-between is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the “hardness” of scientific research—i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors—is controversial. This study analysed 2434 papers published in all disciplines and that declared to have tested a hypothesis. It was determined how many papers reported a “positive” (full or partial) or “negative” support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in “softer” sciences should have fewer constraints to their conscious and unconscious biases, and therefore report more positive outcomes. Results confirmed the predictions at all levels considered: discipline, domain and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared to Space Science, 2.3 times higher in the domain of social sciences compared to the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies on people compared to physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values. These results suggest that the nature of hypotheses tested and the logical and methodological rigour employed to test them vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.

The hierarchy of the sciences:
Physical sciences
Biological sciences
Social sciences

This may explain "physics envy", because the physical sciences seem to be the strongest sciences.

... in some fields of research (which we will henceforth indicate as “harder”) data and theories speak more for themselves, whereas in other fields (the “softer”) sociological and psychological factors – for example, scientists' prestige within the community, their political beliefs, their aesthetic preferences, and all other non-cognitive factors – play a greater role in all decisions made in research, from which hypothesis should be tested to how data should be collected, analyzed, interpreted and compared to previous studies.

In one study, 222 scholars were asked to rate several academic disciplines by similarity. An analysis revealed three axes of variation:
Hard - soft
Pure - applied
Life - non-life

There's support for a hard - soft axis of variation from studies of many features, such as the number of colleagues acknowledged per paper, the immediacy of references, and even the fraction of paper area devoted to graphs.

There are some differing views:
The social sciences cannot be objective
The natural sciences and the social sciences work much alike
They are all socially-constructed intellectual fashions

An intermediate position would be to distinguish between a "core" and a "frontier" of a field. The frontiers of different fields may be much alike, while the cores may be very different. If the contents of advanced university textbooks are any guide, the cores are indeed different, with the physical sciences being much more structured and developed than the social sciences.

This research examined the frontiers of the physical, biological, and social sciences, and indeed found differences associated with hardness -- researchers in the harder sciences tended to report more negative results, after correcting for various other factors. However, the difference was a matter of degree, not kind.

In his discussion, DF considers an odd conundrum: results in the physical sciences are typically much stronger statistically than results in the biological and social ones. So why do they get more negative results?

One factor that DF may not have adequately addressed was how easily a negative result can be given a positive spin, such as an upper limit. The Particle Data Group, which maintains the standard reference tables for particle physics, lists numerous upper limits on decay rates and cross sections, and numerous lower limits on masses. Thus, instead of saying "We did not observe a neutron electric dipole moment", one would say "We observed that the neutron's electric-dipole moment is at most 2.9*10^(-26) e*cm".
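As a rough sketch of that restatement (the central value, uncertainty, and simple Gaussian one-sided limit below are made-up illustrative assumptions, not any experiment's actual analysis or the PDG's procedure):

```python
# Illustrative only: restating a null measurement as a one-sided upper limit.
# The numbers are hypothetical, and a plain Gaussian limit is assumed.

d_measured = 0.2e-26   # hypothetical central value of the dipole moment (e*cm)
sigma = 1.6e-26        # hypothetical 1-sigma measurement uncertainty (e*cm)

Z_90 = 1.2816          # one-sided 90% quantile of the standard normal distribution

# "No significant signal" becomes "the moment is at most this large":
upper_limit = d_measured + Z_90 * sigma

print(f"No significant signal detected; |d_n| < {upper_limit:.1e} e*cm at 90% CL")
```

The same non-detection can thus be written up as a quantitative, publishable "positive" statement about a bound.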

What sort of hypotheses are they testing?
Younger, less developed fields of research should tend to produce and test hypotheses about observable relationships between variables (“phenomenological” theories). The more a field develops and “matures”, the more it tends to develop and test hypotheses about non-observable phenomena underlying the observed relationships (“mechanistic” theories). These latter kinds of hypotheses reach deeper levels of reality, are logically stronger, less likely to be true, and are more conclusively testable.

Then there is the question of how rigorous the analysis procedure is. Experimenter effects happen all over science, but they've been documented extensively in the behavioral sciences. DF suggests that researchers' will to believe could be an important part of the difference.
 
I'm not surprised. I've been very leery of "research" in the soft sciences for some time now.
 

Ironically, the OP study, published in a generally "hard science" journal, is an example of rather poorly done "soft" science. Their hypothesis does not actually follow from the underlying theory they claim to test, so support for the hypothesis doesn't provide evidence for the theory. The hypothesis is that the "softer" sciences would have a higher rate of positive results. The theory they claim gives rise to this hypothesis is that the softer sciences are more influenced by bias and less by actual data. But that theory does not predict more positive results. Pushing a biased agenda, whether motivated by emotion, ideology, or professional self-promotion, can be done equally well, and often more so, by putting forth negative results showing that other colleagues are wrong. So a greater influence of bias would predict either no difference or a null result for their observed variables, yet the authors just make up something that allows them to conclude that the pattern of results is positive support for their theory.

In addition, the observed results are predicted by and explained by multiple theories other than the one the authors promote. First, there is the issue of publication bias. This is not a bias in favor of any theories, but rather a bias against publishing null results or negative results that make no theoretical sense. There are few journals in the soft sciences that allow null results to be published, and this has nothing to do with not wanting to challenge existing theories; rather, it is rooted in the assumption that null results are not negative evidence against a hypothesis but are meaningless and uninterpretable. That assumption, in turn, rests on the idea that because methods for measuring the theoretical variables are so indirect and influenced by so many factors, null results are highly likely even if the theory in question is true. Therefore, null results are not evidence that the theory is incorrect but just the lack of evidence either way. IOW, researchers recognize the "softness" of the methods needed to measure the variables and thus take a conservative approach to avoid over-interpretation of null results.
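To make the publication-bias point concrete, here is a toy simulation. The effect size, sample size, significance threshold, and the assumed 10% publication rate for null results are all made-up illustrative numbers, not figures from the paper; the point is only that filtering out nulls inflates the positive rate among published papers even when the researchers themselves are unbiased.

```python
import random

random.seed(1)

def run_study(true_effect, n=30, noise=1.0):
    """Simulate one two-group study; return True if it yields a
    'positive' result (significant in the predicted direction)."""
    se = noise * (2.0 / n) ** 0.5          # standard error of the group difference
    estimate = random.gauss(true_effect, se)
    return estimate / se > 1.96            # count only the predicted direction

# Assume (arbitrarily) that half of all tested hypotheses are true, with a modest effect.
studies = [run_study(0.5 if random.random() < 0.5 else 0.0) for _ in range(10_000)]
positive_rate_all = sum(studies) / len(studies)

# Editorial filter: null results get published only 10% of the time (assumption).
published = [s for s in studies if s or random.random() < 0.10]
positive_rate_published = sum(published) / len(published)

print(f"Positive rate among all studies run:   {positive_rate_all:.2f}")
print(f"Positive rate among published studies: {positive_rate_published:.2f}")
```

Under these assumptions the positive rate in the published record comes out far higher than the rate among all studies actually run, with no bias on the researchers' part at all.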

Another factor producing more positive findings in the soft sciences is that a true "negative" result, where there is a statistically significant effect in the exact opposite of the predicted direction, is unlikely, given that comparisons are only made when there is a sound theoretical basis to predict the direction of the effect. The number of variables you can measure on a person is endless, and the number of situations you can put them in that might affect those variables is endless. One needs a pretty large body of prior information for any variable-context combination to even come to mind as something worth asking a question about. Keep in mind that every researcher has not only the prior literature at their disposal, but a lifetime of experience that is relevant to theories of human psychology and behavior. Personal experience is far from ideal, but it isn't random, and it's more likely than not to correspond to the reality of a psychological question. That is why people often say "no duh" when hearing psychological findings. Beat the shit out of a kid for 18 years and they have a bunch of problems. People knew that prior to formal research, based on common sense and personal experience. The odds of getting a true negative result once formal research tested the claim were almost zero. In contrast, there are countless questions that would come to mind in chemistry for which common sense and personal experience don't in any way apply and for which prior research doesn't give a clear, strong prediction. Thus, the odds that the results could show some effect in either direction are much higher.

In psychology and other soft sciences, a true negative result in the opposite of the expected direction is only going to happen if either all that prior information was wrong and implied the opposite of what is true, or the methods used were faulty and didn't measure or control the variables as intended. The former reason is why we should not expect many true negative results, but mostly positive or merely null results. The latter reason is why, when true negative results do occur, there is skepticism that the cause could be poor methods, given the inherent room for error in the measurement of most psychological variables. If there is no plausible explanation for why the results came out that way, then they are treated like atheoretical data that weren't predicted a priori, and thus are more likely than predicted results to be the product of unknown methodological errors and randomness. We can critique this reasoning, but it still reflects an honest effort to engage in scientific reasoning. When the measurement and manipulation methods are "soft", the data do not speak for themselves; their meaning, interpretation, and validity are subject to question in light of coherence with what is already known, and data without a clear interpretation tend to be rejected for publication, because it is hard to convince journals to cover the costs of publishing data whose implications no one knows.
 
Not convinced by this. They're explicitly taking the number of positive results as a measure of bias. In doing so they ignore the criteria that determine whether a study gets published. Negative results in an established field are more interesting: you're generally in a position to predict what will give a positive result, so a negative result is interesting and meaningful. It might imply the absence of an expected effect, or it might demonstrate that a particular setup of equipment or conditions is flawed. In a more 'frontier' science, or when dealing with humans and human behaviour, a negative result is often worthless. It doesn't imply an unexpected result, because it's much more likely that one of the many, many confounding variables swamped your results. So a negative result won't show up in a journal to be picked up in this meta-analysis, because it wouldn't get published in the first place. It may not even be submitted for publication.

For example, a colleague of mine spent a year and a half teaching rats to navigate to food based on vertical or horizontal lines in the environment, compared to similar lines on objects. The idea was then to lesion the brain and work out how the striate cortex was involved in navigation. Unfortunately, his finding was that rats don't care about wallpaper, and entirely ignore it no matter how stripy it is. No one was interested in publishing that result, so it disappeared without trace. The following year he started again, switching to ridged boxes, had more success, and got published.

His unwillingness to stretch his earlier results to fit a positive result would appear in the meta-analysis as a massive bias towards positive results. So, yeah, not convinced.

Also, these results are very strong. Results driven by bias tend not to be. The chances of this being purely an effect of human choices seem low to me. Much more likely they've hit a feature of how science is reported.
 
One should find an even clearer hierarchy when going from sciences with global theories to sciences with partial theories, to sciences with situational theories, to sciences with no established theories. Biology got a big step up with the advent of the ToE; otherwise it would still be muddling around, with people dying in efforts to demonstrate things like drugs and anesthesia.
 