lpetrich
Contributor
PLOS ONE: “Positive” Results Increase Down the Hierarchy of the Sciences by Daniele Fanelli
Abstract:
The hypothesis of a Hierarchy of the Sciences with physical sciences at the top, social sciences at the bottom, and biological sciences in-between is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the “hardness” of scientific research—i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors—is controversial. This study analysed 2434 papers published in all disciplines and that declared to have tested a hypothesis. It was determined how many papers reported a “positive” (full or partial) or “negative” support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in “softer” sciences should have fewer constraints to their conscious and unconscious biases, and therefore report more positive outcomes. Results confirmed the predictions at all levels considered: discipline, domain and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared to Space Science, 2.3 times higher in the domain of social sciences compared to the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies on people compared to physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values. These results suggest that the nature of hypotheses tested and the logical and methodological rigour employed to test them vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). 
On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.
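To unpack the abstract's "odds of reporting a positive result were around 5 times higher": an odds ratio compares p/(1−p) between two groups, not the raw percentages, so modest-looking differences in proportions can produce large odds ratios. A minimal sketch with illustrative proportions (hypothetical values, not the paper's exact frequencies):

```python
def odds(p):
    """Convert a proportion into odds: p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p1, p2):
    """Odds of group 1 relative to the odds of group 2."""
    return odds(p1) / odds(p2)

# Illustrative shares of papers reporting positive results
# (hypothetical numbers, chosen to show how raw percentages
# that look fairly close can yield a large odds ratio):
psychology = 0.91   # 91% positive
space_science = 0.70  # 70% positive

print(odds_ratio(psychology, space_science))  # roughly 4.3
```

Note how a 21-percentage-point gap in positive results translates into odds more than four times higher, which is why the paper reports odds ratios rather than simple differences in proportions.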
The hierarchy of the sciences:
Physical sciences
Biological sciences
Social sciences
This ordering may help explain "physics envy": the physical sciences are perceived as the strongest, most rigorous sciences.
... in some fields of research (which we will henceforth indicate as “harder”) data and theories speak more for themselves, whereas in other fields (the “softer”) sociological and psychological factors – for example, scientists' prestige within the community, their political beliefs, their aesthetic preferences, and all other non-cognitive factors – play a greater role in all decisions made in research, from which hypothesis should be tested to how data should be collected, analyzed, interpreted and compared to previous studies.
In one study, 222 scholars were asked to rate several academic disciplines by similarity. An analysis revealed three axes of variation:
Hard - soft
Pure - applied
Life - non-life
The hard - soft axis is supported by studies of many measurable features of publications, such as the number of colleagues acknowledged per paper, the immediacy of references, and even the fraction of page area devoted to graphs.
There are some differing views:
The social sciences cannot be objective
The natural sciences and the social sciences work much alike
They are all socially-constructed intellectual fashions
An intermediate position would be to distinguish between a "core" and a "frontier" of a field. The frontiers of different fields may be much alike, while the cores may be very different. If the contents of advanced university textbooks are any guide, the cores are indeed different, with the physical sciences being much more structured and developed than the social sciences.
This research examined the frontiers of the physical, biological, and social sciences, and indeed found differences associated with hardness -- researchers in the harder sciences tended to report more negative results, after correcting for various other factors. However, the difference was a matter of degree, not kind.
In his discussion, DF considers an odd conundrum: results in the physical sciences are typically much stronger statistically than those in the biological and social sciences. So why do physical scientists report more negative results?
One factor that DF may not have adequately addressed is how easily a negative result can be given a positive spin, such as an upper limit. The Particle Data Group's Review of Particle Physics, the standard reference in the field, lists numerous upper limits on decay rates and cross sections, and numerous lower limits on masses. Thus, instead of saying "We did not observe a neutron electric dipole moment", one says "We set an upper limit of 2.9*10^(-26) e*cm on the neutron's electric dipole moment".
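The conversion from a null measurement to an upper limit is mechanical. For a Gaussian measurement x ± sigma consistent with zero, a one-sided upper limit at confidence level CL is x + z*sigma, where z is the CL quantile of the standard normal. A minimal sketch (the EDM numbers below are hypothetical, chosen only to illustrate the arithmetic, not the actual PDG inputs):

```python
from statistics import NormalDist

def upper_limit(x, sigma, cl=0.90):
    """One-sided Gaussian upper limit at confidence level cl
    for a measurement x with standard error sigma."""
    z = NormalDist().inv_cdf(cl)  # ~1.2816 for 90% CL
    return x + z * sigma

# Hypothetical null result: central value consistent with zero
limit = upper_limit(0.2e-26, 1.7e-26)
print(f"d_n < {limit:.2e} e*cm at 90% CL")  # d_n < 2.38e-26 e*cm at 90% CL
```

This is the simplest frequentist recipe; real EDM analyses use more careful constructions (e.g., handling the physical boundary at zero), but the point stands: the same data that would read as "no effect found" in a softer field is routinely published as a quantitative positive-sounding bound.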
What sort of hypotheses are they testing?
Younger, less developed fields of research should tend to produce and test hypotheses about observable relationships between variables (“phenomenological” theories). The more a field develops and “matures”, the more it tends to develop and test hypotheses about non-observable phenomena underlying the observed relationships (“mechanistic” theories). These latter kinds of hypotheses reach deeper levels of reality, are logically stronger, less likely to be true, and are more conclusively testable.
Then there is the question of how rigorous the analysis procedure is. Experimenter effects happen all over science, but they have been documented most extensively in the behavioral sciences. DF suggests that researchers' "will to believe" could be an important part of the difference.