
Senior scientists "usually wrong"

The upper echelons of the scientific community were yesterday accused of "usually being wrong" and guilty of "a systematic resistance to discovery".
Pseudoscience also has a peer review process of sorts, one characterized by a systematic lack of resistance to discovery. Maybe Calder thinks that system is preferable.

Originally posted by crimresearch

...the peer review process has more than a few screw ups to its credit over the years.
It might be interesting to look at some specific examples, and at the process by which it became apparent that that was in fact the case.
 
drkitten said:


I resubmit that you're not familiar with the peer review process; "a graduate environment" almost by definition will give you a one-sided view of the process, in the same way that a football player will have a one-sided view of the process of refereeing a match. You're also -- forgive my bluntness -- criticising a process that, quite frankly, a typical graduate student is still being trained in.


It is not unheard of for individuals to take the review comments very personally. It happens, but not very often.

One of the hardest things to do as a scientist is to overcome the notion that "the reviewers are just being silly." We get stuck in our narrow little window of focus, and it makes it hard to appreciate the views from outside of that narrow perspective. I have learned to interpret such comments as failure on my part to explain the work clearly. It's easy to say, "It's all right there, why can't they see it?" It's harder to admit that you didn't do a good enough job in the first place.




You're making the common mistake of assuming uniformity on the part of peer reviewers. I'd like to point out that every journal is independently run and edited, and at a "typical" journal, there will be anywhere from three to five independent assessments of the quality of the submissions.


When I review papers, I always get curious as to what the other reviewers are saying. Sometimes I see them, and it's interesting to see their perspective.



These are not typically from "senior scientists," as you put it, but from a variety of people at a variety of places in their careers, including a disproportionate number of early-stage assistant/associate professors (because they're the only ones who have time for the amount of paperwork involved in review).

Heck, even if the senior people get it, they usually just pass it on to a post-doc or a senior grad student to read first so they don't have to think as much. Shoot, I'm still a young guy and I even do that (otoh, it's a good learning experience for the students, and in the end I usually just write my own reviews anyway).
 
Soapy Sam said:
Seems to me, science is a very human activity. Two characteristics of humans are the desire to stand out from the crowd and the desire to be part of the establishment. They tend to come at different times of life. The trick is to balance the springs and the shock absorbers, so we smooth out the bumps and potholes, while actually going somewhere.
And for many academic scientists, the desire to get paid and the desire to get tenure by publishing as much as possible are paramount, especially at high powered institutions.
Once I was at a party with one such person who was complaining bitterly that a journal had accepted his 18 page submission as a one page research note, and how his U actually weighed the number of pages, divided by the number of coauthors, in some formula used to determine eligibility for tenure.
It just so happens that I was on the board of editors of the journal and had reviewed his paper (this was before blind reviews). It was worth a paragraph "brief note" at best.
 
Jeff Corey said:


It just so happens that I was on the board of editors of the journal and had reviewed his paper (this was before blind reviews). It was worth a paragraph "brief note" at best.

There's an important note buried here that I would like to draw out. As Dr(?) Corey indicates, the peer review process itself is changing and continuously being reviewed, in an effort to make the process as effective as possible. "Blind reviews" are a relatively new technique to address some sociological findings that reviewers can be biased by the author's name and/or institutions.

So, yes, "screw ups" exist. Humans screw up, scientists are human, ergo,.... But the system itself is systematically being refined to eliminate the sort of structural bias that Calder alleges.
 
Really I don't understand the criticisms of Calder at all when talking about the peer review of papers. I don't have any figures handy, but I would guess that most papers that get rejected during peer review are those that are either obviously wrong, or aren't significant advances on research already done. Even then, if a paper gets rejected from one journal people can always resubmit to another. And another. And so on. I doubt that many get rejected because they're somehow "politically incorrect" or heretical.

Of course there are failures in the process, but usually those are due to papers that shouldn't have been published falling through the net. And really, peer review isn't a particularly exacting standard. It's really just to catch the obvious pap, and for the more prestigious journals, to decide whether the paper is "worthy" of being published there.

And by the way, there are ways of publicising your work without going through peer review. Posters and submitted talks at conferences are one way. There are also pre-print archives, such as physics's http://www.arXiv.org. You can put a paper on there and ensure there's a wide audience for your work, without going through the usual publication process. (Although in both cases there's still some degree of quality control.)

I think that actually one of the real problems is something that Jeff Corey alluded to, in that jobs and grants in academia are largely decided on the basis of publication record, which gives an incentive for scientists to publish as much as they can. Although this is good to a degree, it also can often mean that some scientists will try to get away with publishing poor quality stuff, which tends to push down the quality of research. Never mind the quality, feel the width kind of thing... One advantage of peer review is that at least it provides some quality control which counteracts some of the worst excesses.
 
Another effect of such pressure might be to encourage sloppy or unethical practices to ensure publishable results. An associate editor on one journal told me about a case where one researcher submitted 4 articles with impressive results in one year. The editors became suspicious when they questioned the author about one where a procedure had allegedly reduced the crime rate in a small Midwest city. They asked to be given the name of the city and its police chief.
The author refused, on the basis of "confidentiality". None of the articles were accepted.
 
geni said:


If you think any piece of published research is flawed you are of course free to prove it.

But you are of course not necessarily free to publish that proof in a peer reviewed publication. You may have to resort to non-peer-reviewed publications, and then you suffer the critique that your work hasn't been peer reviewed.

This is especially troublesome if all you can prove is that the methods of a peer-reviewed scientist are faulty but you cannot prove that his conclusion is inaccurate.
 
A related story from New Scientist, 5th June 2004, p19

Nature and the British Medical Journal have been found guilty of routinely publishing numbers that don't add up...

One or more statistical errors appeared in 38% of Nature papers and 25% of BMJ papers...up to 4% of statistically "significant" results reported in these papers may not be.
 
Dymanic said:
Pseudoscience also has a peer review process of sorts, one characterized by a systematic lack of resistance to discovery. Maybe Calder thinks that system is preferable.
That seems unlikely, but I'm not sure what that has to do with a suggestion that peer review might be improved further.

If we decided whether progress was worthwhile on the basis of whether we're already ahead of the woo-woos, we wouldn't bother doing very much at all.
 
Prester John said:


No, the studies have varying levels of quality; there are ways of scoring it. Low quality work which has little or no grounding in the currently accepted scientific paradigm has, admittedly, little chance of publication in a reputable journal.


Unless, perhaps, there is an experiment that can nicely cut through the noise and demonstrate that something does work.

I can think of a few such cases, for instance.


Funnily enough, as the quality of the science increases, the effects of homeopathy decrease. Some sort of correlation, but I can't for the life of me work it out. :D

Heh. Well, I'm not talking about homeopathy, for which there is, to date, no credible evidence remaining.
 
Originally posted by iain

I'm not sure what that has to do with a suggestion that peer review might be improved further.
From just that article, 'suggesting improvements' does not seem like quite the spirit of Calder's talk; it looks more like he was suggesting abandoning it altogether:

"Calder said the use of peer review, where established scientists decide what research gets published, and the use of review panels that hold the purse strings of university research, were exclusive and had the effect of hindering rather than encouraging new discoveries."

If Calder's objections sound familiar, it is probably because they are among the most worn-out arguments invoked by every crank whose ambition to revolutionize science as we know it was frustrated when his lack of evidence prevented him from finding an audience for his breakthrough theory about Psycho-Spiritual Quantum Spin Gravity. Or whatever. It's called peer review for a reason.
 
The result is a generation of scientists who have become a little too confident that their understanding of the world is more scientifically accurate than it will be proved to be.

Historically, some of the biggest brains have been off the mark with some of their theories. For everything he got right, Einstein maintained that a quirk of physics known as quantum entanglement - where information seemingly travels instantaneously from one particle to another, regardless of how far apart they are - was impossible. Scientists have since proved him wrong.

Amusingly ironic combination of paragraphs there as it appears now that Einstein was right about that after all.

Beware of the Cat... ;)
 
"It amounts to a systematic resistance to discovery,"

Surely Occam's razor should be rigorously applied to any claim of discovery?

It's easy to claim discovery when you can't explain results any other way, but surely peer review is there (at least partly) so that a wider audience gets the opportunity to question whether the application of a known phenomenon has been missed?

There seems to be an odd logic in his article... He suggests that we may or may not discover theories in the future that overturn our current scientific understanding; that possibility is converted into a probability, which is then used to justify his argument, based on the unproven assertion that established scientists find the prospect too disruptive to the status quo.

He may, or may not, have a point that those holding the purse strings are excessively conservative, but the argument he advances seems to be begging the question rather.
 
"Really I don't understand the criticisms of Calder at all when talking about the peer review of papers. I don't have any figures handy, but I would guess that most papers that get rejected during peer review are those that are either obviously wrong, or aren't significant advances on research already done. Even then, if a paper gets rejected from one journal people can always resubmit to another. And another. And so on. I doubt that many get rejected because they're somehow "politically incorrect" or heretical."


I think that pointing out how useful peer review is as a tool to screen out patently bogus material, misses the point.
Keeping whackos out of prestigious journals isn't usually that hard.

But if the extant process not only serves to keep out some controversial but important new research (or at least delay it), AND it gives a false sense of reliability to things that are published, then there is probably room for improvement.

The fact that there are only a few instances of the former (and apparently more of the latter) should not be taken as a sign that everything is all right.

And in the name of science, how much of an imperfect system are we willing to settle for?
 
rockoon said:


But you are of course not necessarily free to publish that proof in a peer reviewed publication.

Huh? Who's stopping you? Is this the same senior scientist from the NSF that keeps stealing my car keys?
 
crimresearch said:
The fact that there are only a few instances of the former (and apparently more of the latter) should not be taken as a sign that everything is all right.

Sounds like a strawman, to me. I don't know anyone who thinks that "everything is all right."

There are a ton of discussions going on all over about how to improve the review process, for both the journals and the funding organizations. To pretend that we use the system(s) we use now because we think they are perfect is mistaken. We use them because they have been about the best process that we could develop, in terms of weeding out the problems while allowing for new conclusions. That doesn't mean the process is perfect, and there are many instances where it has failed, but those are more at the level of acceptance than at the level of rejection.

I don't see peer review as perfect, but the problem is certainly _not_ that it prevents new breakthroughs from happening. It is more that crap can still get through.

OTOH, as has been noted, if there is incorrect stuff that gets through, the process also allows that to be corrected through future work. This happens all the time. The biggest problem that happens is when someone sees an article published in 1985 but doesn't see the counter-article published in 1989, and all the subsequent follow-ups.

It requires knowledge of the literature of the field, which can be hard to keep up with as the journal base grows and grows. Fortunately, most legitimate scientists have a decent knowledge of it, and get into trouble if they don't (because of peer review). The ones who really get in trouble are the quote miners with no actual work of their own who search for anything to support their position, regardless of context.
 
crimresearch said:
I think that pointing out how useful peer review is as a tool to screen out patently bogus material, misses the point.
Keeping whackos out of prestigious journals isn't usually that hard.

Well usually the way "whackos" are kept out of prestigious journals is by peer review. How else are you going to do it?

But if the extant process not only serves to keep out some controversial but important new research (or at least delay it), AND it gives a false sense of reliability to things that are published, then there is probably room for improvement.

The fact that there are only a few instances of the former (and apparently more of the latter) should not be taken as a sign that everything is all right.

And in the name of science, how much of an imperfect system are we willing to settle for?

I don't think anybody is saying that peer review is perfect. The point is whether there is anything better to replace it with.
 
pgwenthold said:
I don't see peer review as perfect, but the problem is certainly _not_ that it prevents new breakthroughs from happening. It is more that crap can still get through.
Precisely!

I wrote a review longer than the original paper regarding a seriously over-egged assessment of a test for pro-ANP. It so happened that I knew a lot about it because I'd been persuaded to try the assay out about six months earlier, and had had to abandon it because it didn't do what it said on the box. As a result I'd enquired quite closely into the background of the company's product evaluation, and had spotted all the flaws. So when the paper arrived on my desk, I was ready for it.

The editor wrote to say that after careful consideration he'd decided to publish it, because the reports of the other two referees had been favourable. I don't know who the other two people were, but they could well have been cardiologists rather than biochemists, and they could well simply not have spotted the flaws in the paper. What would have helped would have been the ability of the referees to see each other's reports - I still believe that if the other two had read what I'd written, they'd have agreed with me.

Rolfe.
 
Again, missing the point.

Defending the system, minimizing its imperfections, pointing out that other things are worse, and engaging in debate sophistry to avoid looking at the problem, simply isn't science. It may be academia, but it isn't science.

The number of peer reviewed articles that are showing up with 'manipulated' research, or original raw data that gets 'lost', is unacceptable. These occurrences should be as close to zero as possible, not just shrugged away.

It makes no more sense to reject criticism of peer review than it does to keep using an uncalibrated test instrument.
 
crimresearch said:


The number of peer reviewed articles that are showing up with 'manipulated' research, or original raw data that gets 'lost', is unacceptable. These occurrences should be as close to zero as possible, not just shrugged away.


"As close to zero as possible."

I would like to submit that the current system achieves that. Specifically, unless there is a way to reduce those occurrences, and we as humanity know about that way, it's not possible for us to achieve any improvement. Almost no one is minimizing or ignoring the problems -- there simply are no solutions on the table that don't make the problems worse instead of better.

If you have suggestions, I'd love to hear them.
 