Toward a Judeo-Marxist biology

What is their evidence?

Do you have any idea what a buzz-kill this constant mewling about evidence is? We are discussing ideas here, concepts, methods of approach. Sure there's evidence. What do most people think about? Material objects: money, pleasure, fame. What do the few think about? Ideas: relationships, motives, causes, effects. What do the elect think about? The One. Verify? Ask them, listen to what they say, read what they wrote.

Sigmund Freud was a very imaginative storyteller, but not much more than that. I remember someone claiming that he had no idea how to test a hypothesis, and that seems very likely.

So, science, but only your science. Look, the science of man is in its infancy. Freud was one of the first to start in on it. Of course, Spinoza provides everything we need for a true understanding of man.

That is indeed correct in many cases, but there are also a lot of clonal colonies - lots of organisms reproduce by a sort of budding, like plants sending out runners or making shoots from their roots.

Sure, there are multicellular organisms that have no meiosis, only mitosis. I was merely pointing out that with regard to reproduction, the difference between a clonal colony and a human is merely the difference between mitosis and meiosis.
 
What is their evidence?
Do you have any idea what a buzz-kill this constant mewling about evidence is? ...
It's better than making baseless assertions.
Look, the science of man is in its infancy. Freud was one of the first to start in on it. Of course, Spinoza provides everything we need for a true understanding of man.
Spinoza??? He lived over 350 years ago. What did he discover that present-day psychologists are only catching up on?
 
True science proceeds from ideas to verification, from the abstract to the concrete. The common man denies this, denies the reality of ideas and affirms only material objects.
 
True science proceeds from ideas to verification, from the abstract to the concrete. The common man denies this, denies the reality of ideas and affirms only material objects.
Bull doo-doo.

The practice of empirical science is an interplay between theory and observation and experiment. I say "empirical" to leave out mathematics, which is pure theory.
 
“Positive” Results Increase Down the Hierarchy of the Sciences | PLOS ONE
The hypothesis of a Hierarchy of the Sciences with physical sciences at the top, social sciences at the bottom, and biological sciences in-between is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the “hardness” of scientific research—i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors—is controversial. This study analysed 2434 papers published in all disciplines and that declared to have tested a hypothesis. It was determined how many papers reported a “positive” (full or partial) or “negative” support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in “softer” sciences should have fewer constraints to their conscious and unconscious biases, and therefore report more positive outcomes.

Results confirmed the predictions at all levels considered: discipline, domain and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared to Space Science, 2.3 times higher in the domain of social sciences compared to the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies on people compared to physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values.

These results suggest that the nature of hypotheses tested and the logical and methodological rigour employed to test them vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.
Something that paper did not discuss: the possibility of putting a positive spin on a negative result, like an upper limit on some effect. But that is still related to "hard" vs. "soft", since on the "hard" side one can come up with very strong theories.
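To make that concrete, here is a minimal sketch of how a null result gets a positive spin as an upper limit. This is Python, the numbers are made up, and a Gaussian likelihood is assumed:

# A null result reframed as an upper limit: a measurement consistent with
# zero can still be published as "the effect is below X at 95% confidence".
from scipy.stats import norm

measurement = 1.2   # hypothetical measured effect (arbitrary units)
sigma = 1.0         # hypothetical 1-sigma measurement uncertainty

# Only 1.2 sigma from zero: not a detection by any usual standard.
significance = measurement / sigma

# One-sided 95% upper limit for a Gaussian likelihood:
# limit = x + z(0.95) * sigma, where z(0.95) is about 1.645.
upper_limit = measurement + norm.ppf(0.95) * sigma

print(f"significance: {significance:.1f} sigma (no detection)")
print(f"95% CL upper limit: effect < {upper_limit:.2f}")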

Noting "The characteristics of subject matter in different academic areas" - PsycNET - PDF at Biglan_-_1973_-_THE_CHARACTERISTICS_OF_SUBJECT_MATTER_IN_DIFFERENT_ACADEMIC_AREAS.pdf
Author Anthony Biglan requested assessments of the degree of similarity of different fields of study in academia, and he then did some multidimensional scaling to find which positions in some abstract space would have the distances that he found from his informants. He then did principal component analysis on those points, and he found three axes (a sketch of the procedure appears after the list):
  • Hard - soft
  • Pure - applied
  • Life - nonlife
He checked his calculations by randomly dividing his data in two and doing those calculations on each half. He found consistent results.
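Here is a minimal sketch of that two-step procedure in Python with scikit-learn. The field list and the dissimilarity matrix are made-up stand-ins for Biglan's averaged informant ratings, not his actual data:

# Biglan-style analysis: similarity judgments -> MDS coordinates -> PCA axes.
import numpy as np
from sklearn.manifold import MDS
from sklearn.decomposition import PCA

fields = ["physics", "chemistry", "biology", "sociology", "engineering"]

# Hypothetical dissimilarities (0 = identical, 1 = maximally different),
# symmetric with a zero diagonal, standing in for informant ratings.
D = np.array([
    [0.0, 0.2, 0.5, 0.9, 0.4],
    [0.2, 0.0, 0.4, 0.8, 0.4],
    [0.5, 0.4, 0.0, 0.6, 0.6],
    [0.9, 0.8, 0.6, 0.0, 0.9],
    [0.4, 0.4, 0.6, 0.9, 0.0],
])

# Step 1: place the fields in a 3-D space whose pairwise distances
# match the judged dissimilarities as closely as possible.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

# Step 2: PCA on those coordinates gives orthogonal axes ordered by
# variance - the candidates for hard-soft, pure-applied, life-nonlife.
scores = PCA(n_components=3).fit_transform(coords)

for field, row in zip(fields, scores):
    print(f"{field:12s} {row[0]:+.2f} {row[1]:+.2f} {row[2]:+.2f}")

His split-half check would then amount to running this same pipeline on two random halves of the informants and comparing the resulting axes.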
 
More:
Numerous studies have taken a direct approach, and have attempted to compare the hardness of two or more disciplines, usually psychology or sociology against one or more of the natural sciences. These studies used a variety of proxy measures including: ratio of theories to laws in introductory textbooks, number of colleagues acknowledged in papers, publication cost of interrupting academic career for one year, proportion of under-35s who received above-average citations, concentration of citations in the literature, rate of pauses in lectures given to undergraduates, immediacy of citations, anticipation of one's work by colleagues, average age when receiving the Nobel prize, fraction of journals' space occupied by graphs (called Fractional Graph Area, or FGA), and others. According to a recent review, some of these measures are correlated to one another and to the HoS. One parameter, FGA, even appears to capture the relative hardness of sub-disciplines: in psychology, FGA is higher in journals rated as “harder” by psychologists, and also in journals specialised in animal behaviour rather than human behaviour.
But is there really a hierarchy? Some people have maintained that *all* scientific theories are intellectual or ideological fashions, and there is some truth to that, especially at the cutting edge of research.
Several lines of evidence support a non-hierarchical view of the sciences. The consensus between scientists within a field, measured by several independent parameters including level of agreement in evaluating colleagues and research proposals, is similar in physics and sociology. ... Historical reconstructions show that scientific controversies are common at the frontier of all fields, and the importance and validity of experiments is usually established in hindsight, after a controversy has settled.
Irving Langmuir's talk on Pathological Science - delivered in 1953 - is about research going astray. His symptoms:
  1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
  2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
  3. Claims of great accuracy.
  4. Fantastic theories contrary to experience.
  5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
  6. Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.
He included several physical-science effects that turned out to be bogus, effects like N-rays, supposedly discovered not long after X-rays. While X-rays quickly violated (1) and (2) - their effects were strong and easy to detect - N-rays stayed borderline observable, with effects that only existed in the imaginations of those who expected to see them. Some mainstream-science theories satisfy (4) rather grossly, but they are accepted because they also grossly violate (1) and (2). Quantum mechanics is an obvious example, but Newtonian mechanics also has counterintuitive features.
 
Back to Daniele Fanelli's PLOS One paper on the hierarchy of the sciences.
The contrast between indirect measures of hardness, which point to a hierarchy, and evidence of high controversy and disagreement in all kinds of research has inspired an intermediate position, which distinguishes between the “core” and the “frontier” of research. The core is the corpus of agreed upon theories and concepts that researchers need to know in order to contribute to the field. Identifiable with the content of advanced university textbooks, the core is clearly more developed and structured in the physical than in the social sciences.

"When correcting for the confounding effect of presence/absence of multiple hypotheses, the odds of reporting a positive result were around five times higher for papers published in Psychology and Psychiatry and Economics and Business than in Space Science."

Though, "Not all observations matched the predicted hierarchy, however." then listing some counterexamples.
The probability of a paper to report a positive result depends essentially on three components:
  1. whether the tested hypothesis is actually true or false;
  2. logical and methodological rigour with which the hypothesis is linked to empirical predictions and tested;
  3. statistical power to detect the predicted pattern (because low statistical power decreases the probability to reject the “null” hypothesis of no effect).

Statistical power - which is primarily a function of noise in the data and sample size - is typically low in social, behavioural and biological papers, and relatively high in disciplines like physics, chemistry or geology. These latter rarely use inferential statistics at all, either because the outcomes are “yes” or “no” answers (e.g. presence or absence of specific chemical compound in a rock) or because their data have such low levels of noise to make any pattern unmistakable. Based on statistical power alone, therefore, the physical sciences should yield as many or more positive results than the other sciences, which should report more “null” results instead.
Some physical-science fields have relatively low statistical power, like detection of exoplanets and discovery of elementary-particle effects, and that's likely the case for other cutting-edge research.
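For a concrete sense of what low statistical power looks like, here is a Monte Carlo sketch in Python with SciPy; the effect size and sample sizes are assumptions for illustration:

# Power = the fraction of simulated experiments that detect a real
# effect at p < 0.05, estimated by brute-force simulation.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
effect = 0.4          # true effect in standard-deviation units
alpha = 0.05
trials = 10_000

for n in (20, 100, 500):              # per-group sample size
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    print(f"n={n:3d} per group: power ~ {hits / trials:.2f}")

With 20 subjects per group, a real effect of 0.4 standard deviations is detected only about a quarter of the time; on power alone, such fields should therefore be full of null results, which is the puzzle the paper is addressing.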
 
Truth value has sub-hypotheses "Prior knowledge and beliefs" and "Deepness of hypotheses tested".

Prior knowledge of a subject increases the chances that a tested hypothesis is true. On the deepness of hypotheses:
Younger, less developed fields of research should tend to produce and test hypotheses about observable relationships between variables (“phenomenological” theories). The more a field develops and “matures”, the more it tends to develop and test hypotheses about non-observable phenomena underlying the observed relationships (“mechanistic” theories). These latter kinds of hypotheses reach deeper levels of reality, are logically stronger, less likely to be true, and are more conclusively testable.

Rigor has sub-hypotheses:
  • "Flexibility in definitions, design, analysis and interpretation of a research"
  • "Prevalence and strength of experimenter effects and self-fulfilling prophecies"
  • "Non-publication of negative and/or statistically non-significant results"
  • "Prevalence and strength of manipulation of data and results"

Flexibility?
In its earliest stages of development, a discipline or field can be completely fragmented theoretically and methodologically, and have different schools of thought that interpret the same phenomena in radically different ways – a condition that seems to characterize many fields in the social sciences and possibly some of the biological sciences.

Experimenter effects are very well-known, and N-rays are just one example of such effects.
The biasing effect of researchers' expectations is increasingly recognized in all disciplines including physics, but has been most extensively documented in the behavioural sciences. Indeed, behavioural data, which is inherently noisy and open to interpretation, might be particularly at risk from unconscious biases. Behavioural studies on people have an even higher risk of bias because the subjects of study can be subconsciously aware of researchers' expectations, and behave accordingly. Therefore, experimenter effects might explain why behavioural studies yield more positive results on humans than non-humans.

Negative and non-significant results? "These can remain unpublished because researchers prefer not to submit them and/or because journal editors and peer reviewers are more likely to reject them."

No discussion of the possibility of putting a positive spin on negative results.

Outright fakery?
Several factors are hypothesised to increase scientists' propensity to falsify research, including: the likelihood of being caught, consequences of being caught, the costs of producing data compared to publishing them, strong belief in one's preferred theories, financial interests, etc… Survey data suggests that outright scientific misconduct is relatively rare compared to more subtle forms of bias, although it is probably higher than commonly assumed, particularly in medical/clinical research.
 
"Papers testing multiple hypotheses were more likely to report a negative support for the first hypothesis they presented. This suggests that the order in which scientists list their hypotheses follows a rhetorical pattern, in which the first hypothesis presented is falsified in favour of a subsequent one. ... However, there was no statistically significant difference between disciplines or domains and large differences could be excluded with significant confidence, which suggests that the rhetorical style is similar across disciplines."

I concede that I've done that myself.
 
Preregistering is now common, and it's a way of getting around the biases that I'd mentioned earlier.

More and more scientists are preregistering their studies. Should you? | Science | AAAS - "Declaring in advance what you're going to study, and how, helps avoid p-hacking and publication bias"

The article starts off with a published paper with a deliberately nonsensical result.
In the years before, the trio had slowly lost faith in the stream of neat findings in psychology. "Leif, Uri, and I all had the experience of reading papers and simply not believing them," Simmons says. It seemed unlikely that it could all be down to fraud. "After much discussion, our best guess was that so many published findings were false because researchers were conducting many analyses on the same data set and just reporting those that were statistically significant," they recently wrote. They termed the behavior, which also gave them their nonsense result, "p-hacking," a reference to the p-value, which determines whether a result is considered significant.
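A minimal simulation of the behavior they describe - many analyses on the same null data set, reporting whatever crosses p < 0.05 (the group sizes and number of measures are arbitrary choices):

# p-hacking in miniature: with no real effect anywhere, testing many
# outcome measures and reporting any that reach p < 0.05 inflates the
# false-positive rate far above the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n, measures, studies = 30, 10, 2_000

false_positives = 0
for _ in range(studies):
    group_a = rng.normal(size=(measures, n))   # ten measures, no true effect
    group_b = rng.normal(size=(measures, n))
    pvals = [ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    if min(pvals) < 0.05:                      # "significant" on any measure
        false_positives += 1

print(f"false-positive rate: {false_positives / studies:.2f}")

With ten independent measures, roughly 1 - 0.95^10, or about 40%, of these pure-noise "studies" find something significant to report.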

...
The crescendo of problems has led some psychologists to adopt a radical solution: describing the research they plan to do, and how, before they gather a single piece of data. Preregistration, in its simplest form, is a one-page document answering basic questions such as: What question will be studied? What is the hypothesis? What data will be collected, and how will they be analyzed? In its most rigorous form, a "registered report," researchers write an entire paper, minus the results and discussion, and submit it for peer review at a journal, which decides whether to accept it in principle. After the work is completed, reviewers simply check whether the researchers stuck to their own recipe; if so, the paper is published, regardless of what the data show.

Preregistration had already become the norm in clinical trials as a way to prevent publication bias, the tendency for many negative results to remain unpublished. Now, it is spreading through psychology and into other fields, not just to ensure those results see the light of day, but also because of a different advantage: By committing researchers to a fixed plan, it takes away some of the degrees of freedom that can skew their work.

...
Too often, a result is presented as if it confirms a hypothesis when researchers are actually doing what has become known as HARKing, he says, "hypothesizing after results are known."

Preregistration - "Preregistration allows researchers to specify and share the details of their research in a public registry before conducting the study."
 Preregistration (science)
 
True science proceeds from ideas to verification, from the abstract to the concrete. The common man denies this, denies the reality of ideas and affirms only material objects.
Bull doo-doo.

The practice of empirical science is an interplay between theory and observation and experiment. I say "empirical" to leave out mathematics, which is pure theory.

It is interesting to me that during the very early 20th century, so much was discovered by accident in laboratories, starting with the Michelson-Morley experiments.

Shut up and experiment!
 
Evolution, like gravity, is a fixed process. It is determined, conscious and realizes specific goals and ends.
 
Sigh...

Evolution has no goals or purpose or any active agents. It just is.
 
Evolution, like gravity, is a fixed process. It is determined, conscious and realizes specific goals and ends.

Evidence?

All the evidence says otherwise. See: nylon-eating bacteria, for one of many striking examples of the utter lack of telos in evolution. Humans are an accident of a contingent and largely stochastic process. That's what the evidence shows.
 
Y'all are free to operate on the basis of your assumptions, as I am on mine. We'll see who falls off the edge of the Earth and who doesn't.
 
Y'all are free to operate on the basis of your assumptions, as I am on mine. We'll see who falls off the edge of the Earth and who doesn't.

We don’t operate on assumptions, we operate on evidence. You operate on assumptions without evidence.
 
You operate on the basis of "fitness", "competition", "deep time", "advantageous mutation", "descent", "favoured races", "selection". Pure fantasy.
 