I could be wrong about this, but I suspect that most people, including those without much of a science background, have at least a vague understanding of the peer review process. A scientist submits a paper to a scientific journal. The editor assigns the paper to an associate editor, who then selects roughly two to four scientists who have a record of publication in the paper's area and no apparent conflict of interest with its author(s). The paper goes to the reviewers, typically without identifying the author(s). The reviewers evaluate the paper independently and each return their review to the associate editor, addressing its overall merit as well as specific strengths and weaknesses. The associate editor then decides whether the paper is accepted without revision (so rare it is almost unheard of), rejected (more common than not), or returned to the authors for revision. Depending on the nature of the needed revisions, a revised paper might be accepted, rejected, or sent out for another round of reviews.
The process works fairly well most of the time. The associate editor making the final decision will rarely have sufficient knowledge of the subject matter to make a determination on his or her own. This is where the reviewers come in. Well-selected reviewers are experts in the field and can effectively critique the paper, determining whether it makes enough of a contribution to be worthy of publication. Problems with the rationale and scope of the contribution to the field, research design, methodology, statistical analyses, and/or interpretation of findings are identified and weighed. If the problems are fairly minor, the review process helps the author(s) produce a much improved paper. If the problems are not so minor, the review process serves a gate-keeping function by preventing shoddy work from being published in reputable journals.
Not surprisingly, there are also a number of problems with this system. Authors often submit appallingly bad papers, some of which are even plagiarized. Reviewers are sometimes able to infer the identity of the author(s). Some reviewers produce incredibly poor reviews that do little to help the associate editors. Associate editors do not always provide the clearest explanations to authors about problems associated with their work. But perhaps the biggest problem of all is that almost nobody in the system, with the exception of some editors, receives any compensation for doing any of this work.
Yep, that's right. There's no money, no rewards, no additional status, and usually little to no recognition. I think that those of us who do it must value science. We must have a sense of obligation to our field(s) of study. I suppose one might say that participating in the process is one of the ways in which we give something back. In addition to producing our own work, we are lending our expertise to evaluate the work of others in our field(s). While it is often thankless work and sometimes quite frustrating, I suppose I sleep somewhat better knowing that I'm contributing in this way.
Sept. 19-25 is the 2nd annual Peer Review Week. The theme is "Recognition for review," which strikes me as quite appropriate. It is described as follows:
Peer Review Week is a global event celebrating the essential role that peer review plays in maintaining scientific quality. The event brings together individuals, institutions, and organizations committed to sharing the central message that good peer review, whatever shape or form it might take, is critical to scholarly communications.

Many scientific journals from many different fields are helping to support it, and I think it sounds like a great idea. Coming up with creative ways to provide recognition to those involved in the process would be beneficial. If you know some scientific types, let them know about Peer Review Week.
Now I had better get back to the review I'm working on. It is a promising paper but one that would benefit from some additional work.