Image: a reviewer at the National Institutes of Health (Wikipedia)
We’ve posted a few blogs on this site relating to doctoral candidates giving and receiving peer review in writing groups, and the feedback that supervisors provide on writing.
Another kind of peer review that PhD students can receive is that generated by journal reviewers, which raises a different set of concerns from those we’ve discussed elsewhere.
PhD students are often encouraged to publish their research in academic journals, but it can be quite daunting to send work out to an unknown audience who will judge whether it is worthy of publication.
Everyone has stories about receiving harsh, unfair reviews when they’ve submitted their work to journals. However much we try to tell ourselves and our students that ‘it isn’t personal’, it certainly feels personal when the negative responses arrive.
As Kate Chanock amusingly points out, the process can feel rather like going through the stages of grief - but in this case, it’s the Seven Stages of Resentment. Of course she is being tongue-in-cheek, but there is more than a hint of truth in what she says.
Despite its problems, peer review underpins most academic work, from the assessment of grant applications through to the publication of the results of those grants. It seems to be the best system we can come up with. So what are the problems?
One challenge in the peer-review system is the long delays the process can incur. It can be difficult for journal editors to find suitable reviewers willing to take on the work. Not only do many academics these days face ever-increasing workloads in their official jobs, but in most disciplines they are asked to do this extra work for no pay and no recognition from their institutions.
Editors must rely on the ‘gift economy’ operating in academia, hoping that reviewers will subscribe to the belief that what goes around, comes around - by doing their share of reviewing, someone else will review their own article when they later submit to a journal.
Further delays occur when well-meaning reviewers agree to do the work, and then find themselves overwhelmed by other tasks and responsibilities. From an editor’s point of view, very subtle nagging skills are needed to coax this voluntary work out of reviewers; from an author’s point of view, a great deal might be hanging on the outcome of the review.
And when those reviews do finally arrive, how helpful are they? In most areas, the standard practice is blind review - double-blind (where the identities of both author and reviewer are anonymous) or single-blind (where the identity of the reviewer is unknown to the author). In theory this anonymity is sensible: it protects the identities of reviewers so that they can be frank in their assessments of manuscripts without risking damage to their own careers.
Unfortunately, this anonymity sometimes allows those reviewers to be vicious in ways that they might consider highly inappropriate if they were to speak openly to the authors.
Whether the reviews turn out to be positive or negative, they are really just two or three people’s points of view – a fourth reviewer may want something else again. It’s perfectly possible to get contradictory reports from different reviewers, suggesting that there is always an element of chance in what ends up getting published.
Even with the best intentions of attempting to be objective and constructive, reviewers can submit entirely different reviews of the same piece of research - they may have particular interests, specialised knowledge, or be focused on different aspects of the writing.
There are moves afoot to address at least some of the weaknesses of peer review. One response has been to implement processes of ‘open review’, where the identity of the reviewer is made public and the reviews themselves are published. While this might encourage more courteous behaviour on the part of the reviewer, the potential risks associated with a junior researcher criticising someone with a big reputation in their field remain.
In some disciplines, everyone has a pretty good idea of what projects are being undertaken by other research groups and where the funding went, so that author identity is a matter of informed guesswork if not overtly known; in these situations, open review dispenses with the pretence of author anonymity.
Post-publication review is another model that might be useful. This allows publication of research and then invites anyone who is interested in the topic to review the work. Such an approach fits well with contemporary practices of commenting on social media. While this system might draw some ill-considered reviews and may or may not be anonymous, on the whole it would seem to be a good way of encouraging debate and ongoing conversations in the field.
In an era when research output is endlessly measured and quantified, the work of reviewing that output could perhaps also be measured in order to provide reviewers with more reward for their effort.
Publons is one organisation that is trying to make it possible for reviewers to get some credit for the work they put into reviewing; another approach is the ‘R-index’ suggested by Gero and Cantor. Both are ways of recognising the work of reviewing as a measurable ‘impact’ and a contribution to the development of the discipline and the dissemination of knowledge.
All these concerns are becoming ever more pressing as the move towards open access gains momentum. As the whole landscape of academic publishing changes, these are important questions for all researchers to consider, and they pose major challenges for doctoral candidates, their supervisors, and the learning advisors who support them.
What’s your experience of peer review? Do you have any solutions to the current problems, or ideas about how the system might be improved? (Which reminds me, I’d better get back to finishing the review that is waiting on my desk …!).