
Is psychology a pseudoscience?

Perspicuo

Scientists tried to replicate 100 psychology experiments and 64% failed
http://www.sciencealert.com/scientists-tried-to-replicate-100-psychology-experiments-and-64-failed

A landmark study involving 100 scientists from around the world has tried to replicate 270 recent findings from highly ranked psychology journals, and by one measure, only 36 percent turned up the same results. That means that for over half the studies, when scientists used the same methodology, they could not come up with the same results.

Can psychology be trusted?
 
This doesn't necessarily mean anything unless psychology as a field routinely declares first results to be final, which I don't think it does.

Incidentally, the article is rather confusing. It mentions trying to replicate 270 findings, but then talks about only 100 studies. Granted, a single study can have more than one result, but it is still confusing journalism.
 
Every branch of science starts as empirical observation and classification. Psychology is still in that phase.

A friend of mine told me her daughter had been diagnosed with Oppositional Defiant Disorder. What the fuck is that? It means she is a teenage girl who doesn't want to do what she is told to do. It might be something she would like to do, but not if she is told to do it.

This is not a diagnosis; it's a label. Europeans were still in the observe-and-label phase of most sciences well into the 19th century, when geologists were still trying to fit their data into the Biblical account of creation.

It's not really fair to call psychology a pseudoscience. It's more of an immature science.
 
Bronzeage,
Psychiatric diagnoses may fall within the ambit of psychology,* but they were not created by psychology; they were created by psychiatry, and thus respond to the perceived societal demands on that medical specialization.

I agree that psychiatric DSM-type diagnoses are protoscientific (if not pseudoscientific).



__________
* Because they are pigeonholes for clusters of behaviors, and therefore may serve the purpose of setting up research populations to compare.
 
It doesn't prove that psychology is a pseudoscience, but it should be a wake-up call. There is a systemic problem that I think leads to such bad science being published and promoted: scientific journals are biased in favor of unusual results and biased against "redundant" results, which are seen as not contributing to the existing body of knowledge. I have heard that kind of criticism against "redundant" papers, and I always cringe. I generally don't trust the results of any study unless they have been replicated. It is certainly not a problem unique to psychology. Scientific journals are not exceptional sources of truth; they are like other publishing businesses: interested in sales.
 
538 had a good article on 'bad science' recently. The idea is that scientists are being given the wrong incentives, the focus being on something other than accurate science.

I don't think psychology is a pseudoscience, but I don't think the body of psychological knowledge is nearly as strong as it will be 100 years from now.
 
538 had a good article on 'bad science' recently. The idea is that scientists are being given the wrong incentives, the focus being on something other than accurate science.

I don't think psychology is a pseudoscience, but I don't think the body of psychological knowledge is nearly as strong as it will be 100 years from now.

I'm confident its methods will change too. Will it be stronger? Probably not. The scientific method is flawed. One suggested fix I think is important: submit the null and experimental hypotheses before data is collected. Still, peer review is a bureaucracy, which is a social problem. I don't think Facebook is an answer for that, though.
 
Regardless of the field, once people start drawing strong conclusions from single isolated studies, they are engaged in pseudo-science. Science only exists as an iterative process and via replication and verification.

Oh, and the OP article is itself a psychology research paper, contributed by hundreds of psychologists using standard psychological methods. So it is somewhat absurd to use this article as scientific evidence that the very field it represents is pseudo-science.

Psychology deals with phenomena that are hard to measure, and that are multi-causal and highly sensitive to contextual variations. Measurement error and contextual dependence mean that most relationships, whether correlational or causal, are going to be weak and require very large sample sizes to show them reliably.

The article shows that the biggest predictors of replication were the original p-value and effect size (which are highly related to each other). IOW, the smaller original effects (whose "significant" p-values are most sensitive to small sample sizes) were less likely to replicate.

The article also notes that the p < .05 cutoff is utterly arbitrary. If the original study had p = .04 and the replication p = .06, concluding a replication failure is idiotic. It makes more sense to compare the two results directly to each other rather than to compare each to a null hypothesis of 0. When they did this, they found that 47.8% of the original findings were within the 95% confidence interval of the replication results, meaning the two did not differ from each other.
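
To make that comparison concrete, here is a minimal Python sketch of the kind of check described above. This is not the paper's actual code, and the effect estimates and standard error below are made-up illustrative values:

```python
from scipy import stats

def within_replication_ci(orig_effect, rep_effect, rep_se, level=0.95):
    """Ask whether the original effect estimate falls inside the
    replication's confidence interval, instead of testing each
    result separately against a null hypothesis of 0."""
    z = stats.norm.ppf(0.5 + level / 2)  # ~1.96 for a 95% interval
    lo, hi = rep_effect - z * rep_se, rep_effect + z * rep_se
    return lo <= orig_effect <= hi

# Illustrative numbers: an original with p = .04 and a replication
# with p = .06 can easily be indistinguishable from each other.
print(within_replication_ci(orig_effect=0.41, rep_effect=0.35, rep_se=0.18))  # True
```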

Then you have the issue of the type of statistical test. Causal interactions between multiple variables are very common in psychology. However, they are also notoriously hard to demonstrate statistically, requiring far more power. Much work in the "hard sciences" rarely deals with stats beyond a simple t-test, where group A and group B are compared. An interaction requires showing that group A and group B differ in one context, but differ in a different way in another context. Even when the pattern of data supports this, you often don't have the power to show that this more complex pattern differs from a simpler pattern. The article shows that results with the simpler type of main-effect group difference were twice as likely to replicate as interactions.
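
A quick Monte Carlo sketch shows the power gap. The cell size, effect size, and design here are invented for illustration, not taken from the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, reps = 30, 0.5, 2000  # illustrative cell size, effect size, simulations

main_hits = inter_hits = 0
for _ in range(reps):
    # Main effect: groups A and B differ by d in a single context.
    a, b = rng.normal(d, 1, n), rng.normal(0, 1, n)
    main_hits += stats.ttest_ind(a, b).pvalue < .05

    # Interaction: the A-B difference is d in context 1 but 0 in context 2,
    # so the same-sized effect is now a difference of differences.
    a1, b1 = rng.normal(d, 1, n), rng.normal(0, 1, n)
    a2, b2 = rng.normal(0, 1, n), rng.normal(0, 1, n)
    contrast = (a1.mean() - b1.mean()) - (a2.mean() - b2.mean())
    se = np.sqrt(sum(g.var(ddof=1) / n for g in (a1, b1, a2, b2)))
    p = 2 * stats.t.sf(abs(contrast / se), df=4 * (n - 1))
    inter_hits += p < .05

print(f"main effect power ~{main_hits / reps:.2f}, "
      f"interaction power ~{inter_hits / reps:.2f}")
# Typically ~0.47 vs ~0.27: the same-sized effect, tested as an
# interaction, needs far larger samples to reach the same power.
```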

They don't compute it, but if you combine these two things, and only look at whether a simple group difference (of the sort most common in other sciences) was the same as the difference in the replication (rather than comparing each result to the null), you'd wind up with something closer to 65% "replication".

That's not that bad. Also, remember that a failed replication does not mean the original study is the one with the wrong result. Some of the failed replications are false negatives.

IOW, a more careful reading shows the findings are not nearly as damning for psychology as the headlines would suggest. Ironically, "Science" magazine only published this article because it would garner media hype and increase sales. They are part of the problem. They almost never publish replications, and they require authors to make conclusions as extreme and "surprising" as they think they can get away with.

All that said, psychology does need to deal with various issues.
In addition to the publication pressure for theoretical novelty over replication, there is a strong bias in the field against publishing null results. This is not for nefarious reasons, but from the misguided notion that null results are uninformative, based on the fallacy that "absence of evidence is not evidence of absence". That notion is bullshit. When a theory predicts that you should find X in a given situation, then so long as a valid effort to find it was made, the failure to find it is evidence that it isn't there and thus that the theory is wrong.

This bias against null results is what leads to the "file drawer" problem. The consequence is that for many "significant" results, there may have been several null results that preceded them but never got published or even submitted. If reviewers knew about those null results, they would set a higher bar for publishing the significant one, perhaps requiring multiple studies and replication within the same paper, or at least some viable account of why this study found a result that the others did not. The anti-null-result bias also means failed replications don't get published. This article is actually a clever way to get null results published.
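
The file-drawer effect is easy to simulate. In this sketch (all numbers invented), a small true effect studied with modest samples produces a "published" literature whose average effect is badly inflated, because only the lucky significant studies escape the drawer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, studies = 0.2, 30, 5000  # a small true effect, modest sample sizes

published, everything = [], []
for _ in range(studies):
    a, b = rng.normal(true_d, 1, n), rng.normal(0, 1, n)
    est = a.mean() - b.mean()
    everything.append(est)
    if stats.ttest_ind(a, b).pvalue < .05:  # only "significant" results get submitted
        published.append(est)

print(f"true effect:            {true_d:.2f}")
print(f"average of all studies: {np.mean(everything):.2f}")  # ~0.20
print(f"average of 'published': {np.mean(published):.2f}")   # ~0.60, inflated
```
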
The article represents a larger trend in the field toward creating on-line repositories for null results and failed replications, plus giving less weight to arbitrary p-value cutoffs in data interpretation.

To me, the most interesting finding was that cognitive psychology results were twice as likely to be replicated as social psychology results. I have training in both areas, and this makes perfect sense to me. I shifted more toward cognitive in grad school in large part because of the greater rigor in measurement, sensitivity to mechanism, and lesser politicization of the topics. Social psychology is the area that deals with issues like racism, sexism, sexual abuse, sexual assault, homophobia, and the psychological aspects of many other interpersonal and social problems where emotions are high and ideological walls are even higher.

I would be surprised if some of that difference in failed replication wasn't due to greater non-scientific bias infecting the process. OTOH, social psych also relies heavily upon measurement instruments more open to measurement error, whether self-reports by the participants, or subjective observations by experimenters of people's behaviors that are grouped into qualitative categories, like whether or not an act was "violent".
 
Wow. You took all that to say the further one gets from cause and effect, the less likely it is for one to get repeatable results. Welcome to the worlds of RA Fisher, Isaac Newton, and CS Sherrington. Still, cognitive psychology remains about seven synapses between cause and effect. Psychoacoustics, my PhD area, is usually only about two or three synapses from S to R, which is why we get much more consistent results and much more continuous theory progression.

One lives in the world to which one has been attracted, and one must live with the consequences of that. To call all psychology into question because of such as social and cognitive psychology is troubling. That is as it may be. Your analysis of the relation of O or experimental hypothesis to results suffers from those doing the work then defining the experimental question. Fix that first.
 
Wow. You took all that to say the further one gets from cause and effect, the less likely it is for one to get repeatable results. Welcome to the worlds of RA Fisher, Isaac Newton, and CS Sherrington. Still, cognitive psychology remains about seven synapses between cause and effect. Psychoacoustics, my PhD area, is usually only about two or three synapses from S to R, which is why we get much more consistent results and much more continuous theory progression.

One lives in the world to which one has been attracted, and one must live with the consequences of that. To call all psychology into question because of such as social and cognitive psychology is troubling. That is as it may be. Your analysis of the relation of O or experimental hypothesis to results suffers from those doing the work then defining the experimental question. Fix that first.

As usual, I cannot understand your incoherent mess.
 
How do the figures compare to other sciences? The article mentions a 50% rate for pre-clinical trials...

QI (http://www.comedy.co.uk/guide/tv/qi/episodes/11/7/), a program dedicated to unusual or remarkable facts, mainly scientific ones, pointed out that the half-life of facts in their program is very low, with about a 7% decay rate per year.

Academics refer to a notion called the "half-life of facts", which says that over time half of what you know will turn out to be untrue, but you do not know which half. On QI it is estimated that 7% of the things mentioned in this episode will be untrue in a year's time. If you are watching a repeat on Dave, even more of the facts mentioned will be untrue. A graph is shown displaying the "Decay of QI So-Called 'Facts'", showing the increasing proportion of facts that are wrong in older series, with 60% of facts in Series A perhaps being wrong.
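
Taking those two numbers at face value, the arithmetic roughly hangs together. (The 13-year gap between Series A and this episode is my guess, not something stated above.)

```python
import math

decay = 0.07  # 7% of "facts" go stale per year, per the QI figure above

# Half-life implied by a 7% annual decay rate
half_life = math.log(0.5) / math.log(1 - decay)
print(f"half-life ~ {half_life:.1f} years")  # ~9.6 years

# After ~13 years only ~39% of facts survive, i.e. ~60% are wrong,
# which is consistent with the Series A figure quoted above.
print(f"surviving after 13 years: {(1 - decay) ** 13:.0%}")
```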

The real issue here may not be that the rate for psychology is very high, but rather that the rate for any science is higher than we might think. I remember setting the school's furnace on fire trying to replicate the finding that vitamin C is degraded by heat. I pulled my half-melted glassware out of the charred ruins, scraped some orange juice residue off the side of the vessels, and found the vitamin C content to be just fine. That doesn't mean that classic scientific findings are incorrect, but it may mean the design of the experiments in which they were first discovered isn't the most robust.

To me, the most interesting finding was that cognitive psychology results were twice as likely to be replicated as social psychology results. I have training in both areas, and this makes perfect sense to me. I shifted more toward cognitive in grad school in large part because of the greater rigor in measurement, sensitivity to mechanism, and lesser politicization of the topics. Social psychology is the area that deals with issues like racism, sexism, sexual abuse, sexual assault, homophobia, and the psychological aspects of many other interpersonal and social problems where emotions are high and ideological walls are even higher.

That doesn't seem surprising to me. Social psychology deals with subject matters that have a much greater potential for spurious influences on results. And you might expect the results to change over a couple of decades, in any case.
 
Wow. You took all that to say the further one gets from cause and effect, the less likely it is for one to get repeatable results. Welcome to the worlds of RA Fisher, Isaac Newton, and CS Sherrington. Still, cognitive psychology remains about seven synapses between cause and effect. Psychoacoustics, my PhD area, is usually only about two or three synapses from S to R, which is why we get much more consistent results and much more continuous theory progression.

One lives in the world to which one has been attracted, and one must live with the consequences of that. To call all psychology into question because of such as social and cognitive psychology is troubling. That is as it may be. Your analysis of the relation of O or experimental hypothesis to results suffers from those doing the work then defining the experimental question. Fix that first.

As usual, I cannot understand your incoherent mess.

No kidding. It starts off strong and then devolves into 'wat' territory pretty quickly. Perhaps it has something to do with having fewer synapses than the rest of us?
 
To me, the most interesting finding was that cognitive psychology results were twice as likely to be replicated as social psychology results. I have training in both areas, and this makes perfect sense to me. I shifted more toward cognitive in grad school in large part because of the greater rigor in measurement, sensitivity to mechanism, and lesser politicization of the topics. Social psychology is the area that deals with issues like racism, sexism, sexual abuse, sexual assault, homophobia, and the psychological aspects of many other interpersonal and social problems where emotions are high and ideological walls are even higher.

That doesn't seem surprising to me. Social psychology deals with subject matters that have a much greater potential for spurious influences on results. And you might expect the results to change over a couple of decades, in any case.

That's a valid point. Much of social psych deals with things like people's attitudes, which change over time and are highly context dependent. Thus, some of the results may not replicate because what was true during the original study is no longer true. OTOH, that reveals an important issue with social psych, namely that it rarely identifies basic properties or mechanisms of human thought and behavior that should be expected to remain fairly constant over time.
As for "spurious" influences, I agree, and that is in large part due to less rigorous measurement of the variables. A prime example is self-reports on 1-7 scales of one's own attitudes, beliefs, feelings, etc. These are real things and they do have impacts, but obviously they do not exist in the head as having some number of units between 1 and 7. Research subjects are forced to take a subjective sense of certainty or intensity in their mental state and translate it into a 1-7 scale. That introduces many sources of random and systematic measurement error that can make results unreliable, especially with small samples.
However, it would not be surprising if there were more bias in social than in cognitive. There is a much more direct connection between the topics studied and the strong views people hold on political and ideological issues. In fact, a huge percentage of social psych studies measure the political and ideological views of the research participants because they are directly relevant to the research question. It is also very common for researchers in particular areas of social psych and clinical to have a strong personal or political interest in what they are studying. For example, racial-minority social psychologists are far more likely to focus their research on questions around racism than on just about any other sub-area.
Such personal investment doesn't guarantee bias, but when combined with methods that leave lots of room for judgment and interpretation, it becomes more likely.
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.

If Myers-Briggs classified personality types according to the date of one's birth, it would certainly be suspect.

A better question might be: is it science at all? The MB is a simple descriptive instrument. A person answers a series of questions, and the pattern of answers puts them in a particular category. The results are easily manipulated by a person who is aware of the different profiles, so the results could be suspect. How valid can any data be if the subjects are able to alter the observations?

Timothy Leary tells the story of his time in Federal Prison. The first part of the processing was a personality test. He happened to be one of the authors of the test, so he gave answers which labeled him as very passive and obedient to authority. This let him be placed in a very minimum security facility. One night, when no one was looking, he walked out the front door and into a waiting car. Fooled them.
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.

It's a conversation that's been had before. I think the scientific consensus is that it's at least a little inaccurate, but if you poll TF, like 90% of the people here are INTJ, so that says something.
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.

It's bullshit, and afaik considered to be on about the same level as horoscopes by the majority of psychologists. Much like horoscopes, the different results are hopelessly vague and seemingly applicable to almost everyone (leading to the Barnum effect), and notoriously unreliable (let some time pass, and when people retake the test, the majority of them will fall into a different category).

I think that the only people who take it seriously are the ones who like the results they got, which make them feel special and awesome (as indeed you'll feel no matter what result you get, since they're all phrased in positive language).
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.

The Myers-Briggs is not accepted as valid within psychology itself. Its harshest critics are psychologists.

Its invalidity doesn't reflect on psychology as pseudo-science any more than nurses advocating the efficacy of prayer (happens a lot) reflects on biology as a pseudo-science. The Myers-Briggs is basically pop-psych made popular in the corporate world by charlatan "consultants" profiting off of corporations with lots of money to blow and lots of motivation to get any competitive edge they can.

A great deal of what happens in hospitals is pseudo-science, but that is a reflection of how the profit motive corrupts any and all science, not an indictment of biology.

It's important to keep in mind that "psychology" is a broad category, spanning everything from basic lab research on human cognition and behavior, to clinical practice where an individual is treated for a specific diagnosis, to people with no particular problems being given advice to improve themselves in some vague way. The training for these careers has as little in common across them as the training to be a genetic scientist versus a pediatrician or a certified gym trainer.

Most psychological researchers are not supportive of much of what is done under the banner of clinical practice. And most clinical practitioners are critical of pop-psych, corporate consultants, etc. That said, it is legit to hold the APA accountable for its near-total failure to enforce any standards of practice among the people it grants a license to. Basically, they just need to get through the training hoops, and then they get it, and it is rarely revoked no matter how unscientific their clinical methods are. Part of the issue is that bad therapy rarely kills your clients or harms them in any provable way. So malpractice suits are rare, unlike in medicine.

That said, the Myers-Briggs is not as completely random and baseless as astrology and horoscopes. The planets' positions when you are born have zero possible impact on the type of person you are. In contrast, the type of person that you are does have some impact on how you answer various questions about what is most important to you, your preferences, your basic worldview, how you act in various situations, etc.

For example, one Myers-Briggs question is: Do you trust reason rather than feelings? It's not a perfect question and it's vague, but ask me that every day and I will always say "Yes". Ask some other people and they will say "No". Odds are high that people's answers to that predict their religiosity, whether they believe in all sorts of paranormal notions, and whether they accept the consensus scientific view on many issues from evolution to vaccines. Some of the questions really suck, but some probably do reflect meaningful and stable differences between people. The real problem arises when they take a bunch of questions like that on totally different things that happen to be correlated, and then try to extract some specific underlying personality dimension as the cause of all the responses. Imagine a car that has quick acceleration, handles corners well, is compact, and has sleek lines. Those are all important qualities. We might slap the label "sporty" on it, but the car has no property of "sporty", and "sportiness" doesn't actually explain those qualities. That's a decent analogy for how to think of personality "types".
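
The label-versus-cause point can be seen in miniature with a principal component: correlated answers will always yield a "dimension" you can name, but the dimension is just a weighted sum of the answers. (The numbers below are invented; this is a toy, not the MBTI's actual scoring.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Four "questionnaire items" that happen to share overlapping influences
shared = rng.normal(size=n)
items = np.column_stack([0.7 * shared + rng.normal(0, 0.7, n) for _ in range(4)])

# Extract the first principal component from the item covariances
eigvals, _ = np.linalg.eigh(np.cov(items, rowvar=False))
print(f"first component summarizes {eigvals[-1] / eigvals.sum():.0%} of the variance")  # ~62%

# Naming that component ("Thinking vs. Feeling", "sportiness", ...) describes
# the correlation among the answers; it does not explain what causes them.
```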
 
I'm curious as to our resident expert's thoughts on the Myers-Briggs personality type categorizations... is there really any substance to them? The whole thing smacks of pseudoscience to me.

The Myers-Briggs is not accepted as valid within psychology itself. Its harshest critics are psychologists.

Its invalidity doesn't reflect on psychology as pseudo-science any more than nurses advocating the efficacy of prayer (happens a lot) reflects on biology as a pseudo-science. The Myers-Briggs is basically pop-psych made popular in the corporate world by charlatan "consultants" profiting off of corporations with lots of money to blow and lots of motivation to get any competitive edge they can.

A great deal of what happens in hospitals is pseudo-science, but that is a reflection of how the profit motive corrupts any and all science, not an indictment of biology.

It's important to keep in mind that "psychology" is a broad category, spanning everything from basic lab research on human cognition and behavior, to clinical practice where an individual is treated for a specific diagnosis, to people with no particular problems being given advice to improve themselves in some vague way. The training for these careers has as little in common across them as the training to be a genetic scientist versus a pediatrician or a certified gym trainer.

Most psychological researchers are not supportive of much of what is done under the banner of clinical practice. And most clinical practitioners are critical of pop-psych, corporate consultants, etc. That said, it is legit to hold the APA accountable for its near-total failure to enforce any standards of practice among the people it grants a license to. Basically, they just need to get through the training hoops, and then they get it, and it is rarely revoked no matter how unscientific their clinical methods are. Part of the issue is that bad therapy rarely kills your clients or harms them in any provable way. So malpractice suits are rare, unlike in medicine.

That said, the Myers-Briggs is not as completely random and baseless as astrology and horoscopes. The planets' positions when you are born have zero possible impact on the type of person you are. In contrast, the type of person that you are does have some impact on how you answer various questions about what is most important to you, your preferences, your basic worldview, how you act in various situations, etc.

For example, one Myers-Briggs question is: Do you trust reason rather than feelings? It's not a perfect question and it's vague, but ask me that every day and I will always say "Yes". Ask some other people and they will say "No". Odds are high that people's answers to that predict their religiosity, whether they believe in all sorts of paranormal notions, and whether they accept the consensus scientific view on many issues from evolution to vaccines. Some of the questions really suck, but some probably do reflect meaningful and stable differences between people. The real problem arises when they take a bunch of questions like that on totally different things that happen to be correlated, and then try to extract some specific underlying personality dimension as the cause of all the responses. Imagine a car that has quick acceleration, handles corners well, is compact, and has sleek lines. Those are all important qualities. We might slap the label "sporty" on it, but the car has no property of "sporty", and "sportiness" doesn't actually explain those qualities. That's a decent analogy for how to think of personality "types".

Doesn't it seem odd to you that psychologists feel they need to criticize such pseudoscience? Why waste time on something so outside science? Putting it in religious-nationalism terms (non-science, for sure), they might fear a terrorist takeover threat from that direction if they are in any way insecure about their status as a science. Please don't ask about why no parallelism. If you don't get it, just leave it lay.
 