
The case for abolishing traditional peer review

Perspicuo

Let's stop pretending peer review works
http://www.vox.com/2015/12/7/9865086/peer-review-science-problems

The researchers then altered the names and university affiliations on the journal manuscripts and resubmitted the papers to the same journal. In theory, these papers should have been high quality — they'd already made it into these prestigious publications. If the process worked well, the studies that were published the first time would be approved for publication again the second time around.

What Peters and Ceci found was surprising. Nearly 90 percent of the peer reviewers who looked at the resubmitted articles recommended against publication this time. In many cases, they said the articles had "serious methodological flaws."

Peer review is "often insulting, usually ignorant, occasionally foolish, and frequently wrong".
 
You should submit this for publication.

Peez
 
Yuck. Had this article been peer reviewed, it would have been less likely to have polluted the discourse with such invalid arguments and absurd premises.

Here is the first place the authors go woefully wrong:

If the process worked well, the studies that were published the first time would be approved for publication again the second time around.

Um, no. Why would the same journal be just as interested in publishing a paper with identical ideas and identical support as papers it published only a few years ago, not to mention the likely dozens of other papers on the same topic that various journals would have published in the intervening years?

Journals have limited space. They understandably place high value on what new information a paper adds, which in these cases would have been zero.

And yes, when a paper has zero merit in terms of making a contribution, reviewers are more critical of methodological flaws that they might otherwise let go if the overall contribution were high. Getting published is the first, not the last, step in peer review; if the ideas and data are shit, that will eventually come to light. Since practical constraints are often the source of methodological limitations, reviewers will let some go (though they usually require that they be noted) if the paper presents novel ideas or novel types of data that are worth other researchers knowing about. But if the ideas and data are not new, then the only possible contribution would be if the paper were methodologically superior to everything before it, which these by definition were not. Finally, these wouldn't even count as "replications," because the researchers didn't change the papers to at least acknowledge that the ideas and study designs were unoriginal, which makes the new "fake" authors look incompetently ignorant of the recent literature.

Peer review is far from perfect and can be improved, but that "study" and this article do more to demonstrate its merits than to support abolishing it.
 
Then that should have been the stated cause of rejection, not methodological flaws.
 
I have to read studies that have made it past the peer review process all the time.

Many are absolute rubbish.

There is no quality control at all.

These people have publications to put out. They have space to fill. Something has to get in, even if it is crap.
 
And what about grant applications? I know a guy from a top US university who put absolute garbage in his application and got a half-million grant. He got a patent on that garbage too, but that's understandable; 99% of patents are garbage anyway.
 
Then that should have been the stated cause of rejection, not methodological flaws.

Not true, and the rest of my post that you snipped off explains why that would generally not be the case. Adding to what I already said, the Vox article provides no valid information on why the papers were rejected. The only thing it says is that "in many cases, they said the articles had 'serious methodological flaws.'" First, "many" almost certainly means significantly less than half: given the transparent for-profit sensationalism of this piece, the writers would have squeezed every bit of hype they could out of the facts. Anything more than half would have been "the majority" or "most," and anything just under half would have been "nearly half." That means most rejections did not even mention such flaws.

For the minority that did mention flaws, this still does not imply that the flaws were the only or even the primary reason for rejection. Most papers that get published still have flaws mentioned in their reviews. The mention of flaws is thus neither a necessary nor a sufficient condition of rejection, so nothing about the reasons for rejection can be inferred from their being mentioned. As I explained, flaws become objectively more relevant to legitimate publication decisions when the ideas and data add nothing to what is already published. So even if the flaws played some role in the rejection of some of the papers, that is perfectly legit, because such flaws undermine the sole contribution that such unoriginal papers could make.
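
To make that concrete, here's a toy Bayes'-rule sketch in Python. Every number except the roughly 90 percent rejection rate from the article is invented purely for illustration; the point is just that if reviews of accepted papers mention flaws nearly as often as reviews of rejected ones, then seeing flaws mentioned tells you almost nothing about whether the flaws drove the rejection:

# Toy Bayes'-rule illustration. All rates except the ~90% rejection
# figure from the article are made up for the sake of the example.
p_reject = 0.90              # "nearly 90 percent" rejected, per the article
p_flaws_given_reject = 0.80  # assumed: rejected papers whose reviews mention flaws
p_flaws_given_accept = 0.70  # assumed: accepted papers whose reviews mention flaws

# Total probability that a review mentions flaws at all
p_flaws = (p_flaws_given_reject * p_reject
           + p_flaws_given_accept * (1 - p_reject))

# Bayes' rule: probability of rejection given that flaws were mentioned
p_reject_given_flaws = p_flaws_given_reject * p_reject / p_flaws

print(f"P(rejected)                   = {p_reject:.2f}")              # 0.90
print(f"P(rejected | flaws mentioned) = {p_reject_given_flaws:.2f}")  # 0.91
# The mention of flaws barely moves the needle, so by itself it says
# essentially nothing about why a paper was rejected.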
 
I don't recall the article mentioning any statistical study, much less a meta-analysis performed on the subject of its concern, peer review.
 
I know people who never smoked and ate healthy but still died of disease at 50. Does that make it wise to abandon those efforts and take up smoking?

Of course peer review is not a magic bullet that eliminates all unscientific drek. But I have rejected about 90% of the articles I have reviewed because they were shit. I know that most of those never made it into print or were forced to go to a lower-tier journal and remove a lot of the shit they were trying to claim. There are many journals, so some still made it into print in all their nonsensical glory, but I'm a cynic, so I never even hope for perfection.

The bigger problem is with people outside the field taking isolated papers and making a big deal of them. Science is a long-term iterative process; any single study, no matter how high its quality, has minimal rational implications. The biggest problem is not that peer review isn't perfect, but that we do not do enough of the other critical things in addition to peer review. For one, we don't do enough replication studies, mostly because the most "prestigious" journals won't publish replications. We need more outlets (and they are currently appearing) for both successful and failed replications.

But to encourage researchers to attempt replication more often, we need universities' hiring and promotion committees to get past their simplistic notions of "impact factor" and "prestige" when judging the value of someone's research. These bodies should also actively encourage academics whose careers focus on integrating the theoretical and empirical work of others into coherent summaries and meta-analyses, rather than on "original" contributions. There are very few rewards for that kind of work, yet scientific progress and the further evaluation of already published work depend heavily on it.

We can also do things to improve peer review itself. One is to make a concerted push for top journals to place less of a premium on hype-worthy theoretical novelty and more on empirical and statistical novelty, by which I mean new types of data and/or analyses that reveal new information, even if it is about existing ideas. This would encourage authors to further test, refine, and, when needed, reject existing ideas rather than inventing new or pseudo-new theoretical frameworks just to pass the test of theoretical novelty many journals require. That requirement has led to a clutter of ideas, some of which are only superficially different and others of which go beyond what the data support in an effort to say something theoretically new. Journals that rely on hype to the non-expert public (like Science and Nature) won't go that way, but the real top journals in specific fields could, because it is almost entirely university researchers and libraries that subscribe to them.

Beyond for-profit journals and those that cater to the general public, another problematic format is pay-to-publish journals like PLOS ONE. I've seen lots of drek come out of there. On the one hand, being online-only allows them to ignore length and accept any article that has sound methods and data, no matter its theoretical claims, which could help with a lot of the problems I mentioned above. However, much as with the FDA and its user fees, they only get paid when the author gets published, so they have a massive conflict of interest biasing them to lower their standards. They also openly admit that they let authors spout theoretical rubbish that goes beyond the data: they review only for methods and stats, and even then it is often a single editor rather than an editor plus 2-4 reviewers who usually have more specific expertise than an editor. They should consider a pay-to-be-reviewed model, where you pay regardless of whether your paper is accepted, or perhaps review a couple of papers for them in lieu of payment, which would reduce their editorial costs.
Regardless, people and journalists need to be educated about such journals, and especially about how the profit motive inherently undermines scientific integrity (both when the journal is for-profit and when the researcher works for a private company). That should actually be part of the basic science and critical-thinking curriculum in grade school.

Of course, all of this helps only in fields that actually employ methods capable of empirically testing claims. It would be very helpful in a field like psychology, where empirical methods are common and statistical sophistication is actually higher than in most hard sciences, but where the nature of the discipline means that the measurement of variables is usually several logical steps removed from the theoretical concepts of interest, and the data preclude some but not all alternative theories. These features open the door for too much theoretical bullshit to be slapped onto the data. But peer review likely does little to weed out the drek in humanities areas that rarely even try to employ quantified measurement of randomly sampled and systematically recorded observations. In such areas, "peer review" probably amounts to a PC panel making sure the ideas toe the ideological line of the day (lipstick on a pig and whatnot).
 
Not true, and the rest of my post that you snipped off explains why that would generally not be the case.

Sounds like you're desperately trying to find flaws in their paper, really can't find any juicy ones in the article, and have instead started to guess...
Not a pretty sight.
 
I don't recall the article mentioning any statistical study, much less a meta-analysis performed on the subject of its concern, peer review.

???? Read your own OP. The quote you gave was entirely about a study where published articles were resubmitted a few years later with nothing but the authors' names and affiliations changed. The researchers reported that 90% of the papers were rejected, and inferred that this shows inconsistent peer-review standards, based upon their glaringly wrong presumption that 100% should have been accepted given that they had already been accepted and published before.

The fact that the rest of the article doesn't even pretend to provide empirical support for its claims about peer review is all the more damning: the piece is useless as an analysis of the merits of peer review, except as an example of non-reviewed drek.
 
Sounds like you're desperately trying to find flaws in their paper, really can't find any juicy ones in the article, and have instead started to guess...
Not a pretty sight.

No, it sounds like you have gullible blind faith and accept the article's claims despite their lack of empirical support and blatant logical fallacies.
You are the one blindly and unreasonably assuming that "many" means "most" and that any mention of flaws means those flaws were the sole basis for rejection. I am doing what rational and competent peer reviewers do: pointing out the alternative and far more plausible interpretations that contradict your and their presumptions and leave the article's claims unsubstantiated. The burden of proof is 100% on the author, and on those who accept its claims (you), to show that their interpretation is more empirically supported and/or theoretically plausible than these alternatives.

Again, the irony is that this article is such drek itself that it would have been unlikely to pass peer review at any decent journal on research methods or even philosophy of science. That would have been a good thing, since then it wouldn't be out polluting the minds of the gullible and scientifically ignorant with claims that are at best unsupported and often demonstrably false. Fortunately, at least some people know that VOX has the same integrity and intellectual standards as FOX (i.e., none), which limits its impact mostly to true believers.
 
Peer review is done by people so there will be errors and biases. Poor peer review is a problem, but it is solvable. And unless someone comes up with a feasible alternative for quality control, I don't see the point in abolishing it.

IMO, peer review suffers for a number of reasons:
1) there is an ever growing mismatch between manuscripts and competent reviewers,
2) there is little personal incentive to spend adequate time reviewing manuscripts, and
3) many journals push for quick turnaround times.
 
I need more data before coming to the conclusion that "peer review doesn't work".

"This paper is great and trustworthy!" In reality, it should mean something like, "A few scientists have looked at this paper and didn't find anything wrong with it, but that doesn't mean you should take it as gospel. Only time will tell."

Well, of course. No one should take any scientific study as gospel. That's the fault of ignorant people. That's why studies often use words like "suggests".
 