
A Question About Peer-Review

The Mad Hatter
Hello,

I was debating evolution with a friend, and mentioned the nylon bug. He pointed me to AiG's rebuttal, found at answersingenesis.org/tj/v17/i3/bacteria.asp

He tells me he emailed AiG to ask them if it was peer-reviewed (I think he actually used the term peer-edited, if that makes any difference), and they said it was. This seemed odd, because I thought I had read earlier that no creationist paper had passed peer-review. I also remember reading a TalkOrigins response to it that pointed out some fairly blatant errors. The page says the article was published in a creationist journal, not a scientific one.

So I would like to know: Is there a way to tell if an article is actually peer-reviewed? Is it possible that AiG is using some sort of "religious peer-review" in hopes that people will confuse it with a scientific one? Is there a difference between peer-review and peer-edit?

Thanks,
MH
 
Just so you know, AiG publishes its own articles almost exclusively in its own journal, then claims that process counts as "peer-review". They are trying to adhere to the letter of the process of science, but are far from the spirit of it.

It's sort of accurate - they have no other peers down in their league!
 
So I would like to know: Is there a way to tell if an article is actually peer-reviewed? Is it possible that AiG is using some sort of "religious peer-review" in hopes that people will confuse it with a scientific one?

No, and yes.

I could easily set up a web-journal that I claimed was peer-reviewed when it wasn't.

I could also easily set up a journal that I claimed was "peer-reviewed" where the editorial board (the peers) were as incompetent as I am, or even one with a legitimate editorial board, but where I routinely overruled them to make sure that the papers that reflected my biases were published.
 

See: Roger Coghill and the European Biology and Bioelectromagnetics journal:

http://www.ebab.eu.com/default.asp
 
Who decides who the "peers" are? If it's a creationist journal and articles are reviewed by other creationists, what does that even mean?
 
Who decides who the "peers" are? If it's a creationist journal and articles are reviewed by other creationists, what does that even mean?


"Peer-review" is a first line of defense, nothing more. A lone nutcase will be unable to muster support for his wild ideas and won't be able to get it published anywhere beyond his web pages or a vanity press. A well-organized group of nutcases will be able to self-review, but it will also be obvious when you look at the list of the editorial board that they're all a bunch of like-minded nutcases and you can evaluate the quality of the journal on that basis.

It's rather like "accreditation," in that regard. I could start a university tomorrow if I liked, just by signing a few papers and giving a credit card number. Starting an "accredited" university would be a little bit more difficult, since I would need to find an accreditation board and persuade them that a university without faculty, facilities, or coursework was worth accrediting, and I'd probably have to pay that board a substantial sum of money to do so. Founding a "regionally accredited" university -- one that was recognized by the US Department of Education and authorized to accept Federal student loan money -- would be quite difficult, because I would need to come up with real facilities. But even the first level of accreditation is enough to keep out the truly indigent scam artists....
 
Just so you know, AiG publishes its own articles almost exclusively in its own journal, then claims that process counts as "peer-review".
Sounds a bit like the process Behe described in his testimony in the Dover Panda Trial:
Q But you actually were a critical reviewer of Pandas, correct; that's what it says in the acknowledgments page of the book?

A That's what it lists there, but that does not mean that I critically reviewed the whole book and commented on it in detail, yes.

Q What did you review and comment on, Professor Behe?

A I reviewed the literature concerning blood clotting, and worked with the editor on the section that became the blood clotting system. So I was principally responsible for that section.

Q So you were reviewing your own work?

A I was helping review or helping edit or helping write the section on blood clotting.

Q Which was your own contribution?

A That's -- yes, that's correct.
 
Who decides who the "peers" are? If it's a creationist journal and articles are reviewed by other creationists, what does that even mean?
The editors decide, and sometimes editorial assistants.
what does that even mean?
The problem you point to happens in mainstream science also.

When a field is divided on an issue (like the question of continental drift 50 to 100 years ago), you usually have to publish within your own camp, if you have one. Anything that is truly new and different is often difficult to publish, unless you are already well known.

What do you find more worrying: the fact that "bad" stuff gets published, or that "good" stuff gets frozen out?
 
Peer review by itself is far less relevant than the quality of the journal.

Every field ranks its journals in some fashion. Generally, the A journals drive the science, the B's contribute a little bit, and everything else is just stuff to fill a vita or get some people tenure.

Probably the single most important factor determining the quality of a research article is the journal it appears in.
 
Pseudoscience consists, almost by definition, of taking the superficial trappings of real science and figuring out some way to claim to have them.

Creationist "peer review" and the "control problem" in the PEAR activities are classic examples.
 
One might hope the author list was the most relevant factor, no?

No, most authors, even the good ones, will produce all sorts of crap over their lifetime. There's a reason why even a top-flight biologist will not be able to put all of his papers in Nature.

Actually, this belief that the author list determines quality is a hindrance to decent scholarship in at least one area -- law. "Law reviews," the university-run legal journals, are not typically peer-reviewed. Instead, they're run by second- and third-year law students who are typically neither professional scholars nor professional publishers. The negative effects of this, in general, are well-documented.

But a well-known side effect is a tendency of law review editors to make decisions about manuscript quality, not on the basis of the actual merits of the article (which they're ill-equipped to judge), but on the quality of the other places where the author has published. Or as one lawyer told me a number of years ago, after he had just placed an article in the Harvard Law Review, "well, that's it -- now I'll never have to worry about getting a rejection letter again."
 
Evidence, please; and are not these two things linked, with all kinds of nasty feedbacks?

The link isn't very strong, and there's not much feedback, simply because "peer review" is not the sort of thing that can be put on a sliding scale. Your journal either has it or it doesn't.... and most of the quite bad journals have peer review, too.
 
NYT Health section
Because findings published in peer-reviewed journals affect patient care, public policy and the authors' academic promotions, journal editors contend that new scientific information should be published in a peer-reviewed journal before it is presented to doctors and the public.

That message, however, has created a widespread misimpression that passing peer review is the scientific equivalent of the Good Housekeeping seal of approval.

Virtually every major scientific and medical journal has been humbled recently by publishing findings that are later discredited. The flurry of episodes has led many people to ask why authors, editors and independent expert reviewers all failed to detect the problems before publication.
 
No, most authors, even the good ones, will produce all sorts of crap over their lifetime.
I do not disagree, but your conclusion does not follow from this fact. I saw the question as which is a better indicator of quality: the author or the journal.

Do you really believe the author is not an indicator at all? Do you really expect a random article in a "good" journal to be better than a random paper from the CV of the scientist you most respect? The first is unlikely to have any real (long-term) impact, the second much more so, no?
There's a reason why even a top-flight biologist will not be able to put all of his papers in Nature.
Could I suggest that you pick an old "top-flight biologist" and ask whether, of all the papers they submitted to Nature, Nature only rejected the worst ones, and only accepted the really good ones?
Actually, this belief that the author list determines quality is a hindrance to decent scholarship in at least one area -- law.
I have no experience in law, and was targeting science and maths; sorry not to have been clear on that.

Within science and maths, however, my experience is that a good scientist produces relatively few "bad" papers; she might make mistakes and publish errors on occasion (which are rarely picked up in peer review!) but the fraction of truly bad papers is very low. So I expect a good author is more consistent than a good journal: how would one quantify that?
 
I do not disagree, but your conclusion does not follow from this fact. I saw the question as which is a better indicator of quality: the author or the journal.

And I answered. The journal is a better indicator of quality.

A paper by a "good" author in a mediocre journal is probably not a very good paper.

A paper by a mediocre author in a good journal is probably a good paper.

Do you really believe the author is not an indicator at all? Do you really expect a random article in a "good" journal to be better than a random paper from the CV of the scientist you most respect?

Yes.

The first is unlikely to have any real (long-term) impact, the second much more so, no?

No, the article in the good journal is likely to have much better long-term impact. In fact, a number of companies (most notably ISI) have established an industry measuring exactly this factor -- it's called "impact factor" or IF -- specifically to measure and compare journal quality so that people can make informed decisions.

Just as an obvious explanation -- every research library in the world gets Nature, and most practicing biologists read it religiously. Any paper published there will be seen by tens of thousands of eyeballs. A paper by a Nobel laureate published in an obscure journal with a circulation of 200 will still only be seen by a few hundred people. Hence, less impact.
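
In case it helps to make the IF idea concrete, here is a minimal sketch of the standard two-year calculation (the function name and every figure below are my own, invented purely for illustration):

Code:
# Two-year impact factor: citations received this year to items published in the
# previous two years, divided by the number of citable items published in those
# two years. The journals and numbers below are hypothetical.
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical "A journal": 12,000 citations this year to 400 recent papers.
print(impact_factor(12_000, 400))  # 30.0
# A hypothetical "C journal": 30 citations to 250 recent papers.
print(impact_factor(30, 250))      # 0.12

Since the numerator counts how often people actually cite the journal, the Nature-versus-obscure-journal point above falls straight out of the definition.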

Within science and maths, however, my experience is that a good scientist produces relatively few "bad" papers; she might make mistakes and publish errors on occasion (which are rarely picked up in peer review!) but the fraction of truly bad papers is very low. So I expect a good author is more consistent than a good journal: how would one quantify that?

The easiest way is to read the ISI studies about the relationship between journal quality and impact factor. If you insist on re-inventing wheels, do a survey yourself; pick a dozen papers at random and give them to a test group to read and rank. The results are fairly clear that the journal is a better predictor than the author.
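
And if you did want to run that survey yourself, the comparison step might look something like the sketch below (the ratings and ranks are entirely made up, and I'm assuming SciPy for the rank correlation -- this shows only the mechanics, not a real result):

Code:
# Toy comparison: which tracks a panel's quality ratings better, the journal's
# rank or the author's rank? All data here is hypothetical.
from scipy.stats import spearmanr

panel_rating = [9, 8, 8, 7, 6, 6, 5, 5, 4, 3, 3, 2]   # panel's quality scores for 12 papers
journal_tier = [9, 9, 7, 8, 6, 5, 6, 4, 4, 3, 2, 2]   # e.g. journal impact-factor rank
author_rank  = [8, 4, 9, 5, 7, 3, 8, 2, 6, 5, 1, 4]   # e.g. author's citation rank

rho_journal, _ = spearmanr(panel_rating, journal_tier)
rho_author, _ = spearmanr(panel_rating, author_rank)
print(f"journal vs panel: {rho_journal:.2f}")  # high with these invented numbers
print(f"author vs panel:  {rho_author:.2f}")   # lower with these invented numbers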
 
I agree completely with Dr. K.

Impact factor pretty much answers the question. The A journals publish articles that get cited a lot, motivate further research, and shape the science.

The B's do that somewhat.

Look at the impact factor for a C journal -- I'd bet the majority of articles never get cited even once!

Most highly productive publishers find "homes" for everything they do. So it's not uncommon for a stud or studette in an area to have tons of A's, but also a fair number of B's, and perhaps even a C or two.

Plus, journal quality is probably the most important factor universities look at when deciding on tenure.

When I was in psych, I was told that one pub in Psychological Review would get you tenure, whereas 100 in Psychological Reports would not.
 
I should add to that: even the rejection rate for a journal is misleading.

The best journals sometimes have a somewhat higher acceptance rate, even though it's much harder to get published in them -- the reason is that the quality of the journal is so high that people only send their best stuff there. So the self-selection by authors artificially inflates the journal's acceptance rate.

If it's published in an A journal, it may not be the truth, and the conclusions might turn out to be completely wrong (based on future research), but there's a high probability the science is good to great.
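
To put toy numbers on that self-selection effect (the quality distribution and the thresholds below are invented assumptions, not data from any real journal):

Code:
# Toy simulation: the "top" journal has a stricter quality bar, yet shows a higher
# acceptance rate, because authors only send it their strongest work.
import random

random.seed(0)
papers = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # latent "quality" of each paper

TOP_BAR, MID_BAR = 1.5, 0.5   # acceptance thresholds (top journal is stricter)
SELF_SELECT = 1.2             # authors only submit to the top journal if quality > 1.2

top_subs = [q for q in papers if q > SELF_SELECT]
mid_subs = [q for q in papers if q <= SELF_SELECT]

top_rate = sum(q > TOP_BAR for q in top_subs) / len(top_subs)
mid_rate = sum(q > MID_BAR for q in mid_subs) / len(mid_subs)
print(f"top journal acceptance rate: {top_rate:.0%}")  # roughly 55-60%, despite the stricter bar
print(f"mid journal acceptance rate: {mid_rate:.0%}")  # roughly 20-25%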
 
