Students often tell us they’ve been asked to find ‘peer reviewed’ journal articles for an assignment. Librarians and academics sometimes talk about peer review in very general terms – ‘rigorous editorial process’, ‘evaluated by experts’, ‘reliable academic standard’ and so on. The language we use to talk about peer review tends to be quite positive and therefore it’s no surprise that on campus there seems to be a general consensus that peer review is a good indicator of high quality information.
When we’re searching for literature we regularly apply the ‘peer review’ filter to help us get to that information more quickly but do we consider in any great depth what is going on behind the magic button? Tools that help us get to the information in a speedy and efficient way are useful but it is also important to have some understanding of what the tool can and cannot do.
The peer review filter limits results to peer reviewed journal titles. However, editorials, news items and book reviews published in peer reviewed journals don't go through the same editorial process as a research article. The filter alone cannot eliminate these items from a database search, although their number may be quite limited.
It is also useful to know that the phrase peer review is an umbrella term describing a growing number of ways in which academic literature is evaluated for quality. Traditional methods have centred on single, double and triple blind review: single blind anonymises the reviewer but not the author; double blind anonymises both; and triple blind anonymises the authors to the members of the editorial board as well. Criticisms of these methods include a lack of transparency, potential conservatism amongst publishers and the potential for bias (including cultural and gender bias), not to mention the expense and time involved in what can be quite a protracted process. This has implications not only for those involved in the publication of an article but also for the wider scientific community and anyone who needs to evaluate the scholarly literature.
An increase in open access publishing platforms has ushered in new and innovative methods of peer review. PLoS ONE and F1000Research are examples of platforms where publishers have attempted to create a more transparent approach. They still use peer review but often apply different criteria and/or methods, with less weight given to impact factor and greater concentration on technical rigour. In some cases supplementary information, including raw source data, intermediate results and the reviewers' full reports, is made available to readers alongside the journal article. This can create an enhanced reading experience. On the flip side, the (gold) open access business model requires a fee to be paid to the publisher, and this has raised questions about the balance between income generation and quality control.
Major controversy has also arisen in the past where authors have been allowed to suggest their own reviewers. This has, perhaps unsurprisingly, proven open to abuse, and in 2015 a number of stories relating to faked peer reviews emerged. Basic measures can be, and have been, introduced to prevent this from happening again.
Many subscription publisher websites and open access platforms now provide definitions and some information about the process(es) they employ, while others remain scant on detail. Wiley are a good example of a subscription publisher making a concerted effort to provide clear information about their peer review processes, and they are also beginning to experiment with new open and collaborative methods. It's always worth having a look to find out what methods of evaluation an article has been subject to.
In 2011 the House of Commons Science and Technology Committee published the report of its inquiry into peer review in scientific publishing. If you want to delve further into the process and read current debate and testimony from senior figures in the academic, scientific and publishing communities, this is a good place to start. The report acknowledges peer review as an essential mechanism for moderating the quality of scientific research. Its recommendations include enhanced training for reviewers and researchers alike, increased collaboration between stakeholders, and further experimentation with more transparent models to secure a reliably high quality of peer review across the spectrum of scientific publishing.
You can find out more about peer review via the library website at https://www.gcu.ac.uk/library/pilot/publication/peerreview/ and, if you are a researcher, about open access policies and publishing at https://www.gcu.ac.uk/library/servicesforstaff/openaccessatgcu/
CALLAWAY, E., 2015. Faked peer reviews prompt 64 retractions. Nature [online]. [viewed 05 June 2018]. Available from: doi: 10.1038/nature.2015.18202.
FACULTY OF 1000., 2018. Tips for Finding Suitable Referees – F1000Research. [viewed 05 June 2018]. Available from: https://f1000research.com/for-authors/tips-for-finding-referees.
FOX, C.W., BURNS, C.S. & MEYER, J.A., 2016. Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Functional Ecology [online]. 30(1), pp. 140-153. [viewed 05 June 2018]. Available from: doi: 10.1111/1365-2435.12529
HOUSE OF COMMONS SCIENCE AND TECHNOLOGY COMMITTEE., 2011. Peer review in scientific publications. London: The Stationery Office Ltd. [viewed 05 June 2018]. Available from: https://publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf
PUBLIC LIBRARY OF SCIENCE., 2018. Editorial and peer review process. PLoS ONE. [viewed 05 June 2018]. Available from: http://journals.plos.org/plosone/s/editorial-and-peer-review-process