
Different forms of peer review policies

Since expectations of peer review differ widely, many different forms of peer review procedures have emerged (Horbach and Halffman 2018). We organised these different forms based on twelve key attributes. Below, we give an overview of these forms and, for each, briefly discuss some advantages and disadvantages and provide suggestions for further reading.

Timing and selectiveness

At what stage of the publication process does review take place?

Post-publication review
Pre-publication review
No review takes place
Pre-submission review (including registered reports)

More information

Traditionally, peer review occurs between the submission and publication of a manuscript. However, two new peer review procedures have been proposed. Pre-submission peer review, also called ‘registered reports’ or ‘data-free review’, has been introduced to address the growing concern of publication bias favouring positive research results and is thought to prevent data tweaking (Nosek and Lakens 2014). The format entails peer review of research protocols before empirical research is carried out. Review usually takes place in two stages: the first based only on the research questions, the motivation of the research and the intended methodology; the second after data have been gathered and analysed, to check whether the final manuscript complies with the research protocol. After the first stage, protocols are either rejected or ‘conditionally accepted’, on condition that the final manuscript complies with the initial protocol (Center for Open Science 2018). One of the main advantages of this format is that review is carried out at a moment when changes to the experimental set-up can still be made, in contrast to traditional peer review, where flaws in experimental design can only be detected post hoc (Center for Open Science 2018). Our data suggest that pre-submission peer review is associated with fewer retractions, although systematic evidence about the advantages of pre-submission review is still limited (Horbach and Halffman 2019).

Besides pre-submission peer review, post-publication peer review has been introduced to facilitate faster knowledge exchange, including scrutiny and qualification of published research findings by a wider audience. The practice has been in use at preprint servers for several decades and is currently being adopted by several journals as well (Knoepfler 2015, Walker and Rocha da Silva 2015). While evidence about the effects of these innovations is scarce, it does appear to suggest benefits, in particular for pre-submission review.


What quality criteria does your journal use for peer review?

Methodological rigour and correctness
Anticipated impact (either within or outside of science)
Novelty
Fit with journal's scope

More information

There are several aspects of a manuscript that can be taken into account, or deliberately not taken into account, during the peer review process. The list above contains some of the most commonly used criteria (Horbach and Halffman 2019). Not focusing on the anticipated impact and/or novelty of research findings is thought to reduce publication bias (i.e. publishing only positive results) and practices such as data tweaking and HARKing, i.e. hypothesising after the results are known (PLOS 2018). In fact, one study found that selecting on these criteria is associated with more retractions (Horbach and Halffman 2019). Other studies have shown that reviewers are most likely to comment on the theoretical embedding and framing of a study, rather than on its empirical soundness (Strang and Siler 2015). Hence, whatever criteria you believe to be appropriate, it seems important to instruct reviewers about them carefully (Malički et al. 2019).


Openness of review

What type of reviewers are included in your journal's peer review process?

Commercial review platforms
Editor-in-chief
Wider community / readers
Editorial committee
External reviewers suggested and selected by editor(s)
External reviewers suggested by authors

More information

In the vast majority of journals, manuscripts are reviewed by referees selected by editors (Horbach and Halffman 2019). Alternatively or additionally, manuscripts may be reviewed by an editor-in-chief, an editorial committee, external reviewers selected or suggested by authors, the wider community/readers, and/or commercial review platforms.

While there are reasons to use reviewers suggested by authors (e.g. to identify a wider pool of reviewers), this entails a considerable risk of fake reviews: either the suggested reviewer is a made-up person, or the suggested reviewers are real and appropriate experts but the helpfully suggested email addresses lead back to the author, and thus generate a positive review. To prevent fake reviews while using author-suggested reviewers, check the credibility of the suggested reviewer's institute, check whether the email address is provided by a third party (e.g. Gmail/Hotmail), check for indications of conflicts of interest, check the appropriateness of the reviewer's expertise, and check whether the suggested reviewer regularly co-authors with the corresponding author (Gao and Zhou 2017, Tancock 2018).
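To illustrate, these checks could be encoded in a simple screening script, as in the minimal sketch below. Everything here is an illustrative assumption rather than part of any actual editorial system: the ReviewerSuggestion fields, the list of free email domains, and the flag texts.

```python
# Hypothetical screening of author-suggested reviewers for red flags.
# All fields and the domain list are illustrative assumptions.
from dataclasses import dataclass, field

FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "yahoo.com"}

@dataclass
class ReviewerSuggestion:
    name: str
    email: str
    institution: str
    coauthored_with_author: int = 0          # joint papers with the corresponding author
    declared_conflicts: list = field(default_factory=list)

def screen_suggestion(s: ReviewerSuggestion) -> list:
    """Return a list of red flags an editor may want to verify manually."""
    flags = []
    domain = s.email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        flags.append("free email provider; verify against an institutional address")
    if not s.institution:
        flags.append("no verifiable institutional affiliation")
    if s.coauthored_with_author > 0:
        flags.append(f"co-authored {s.coauthored_with_author} paper(s) with the corresponding author")
    if s.declared_conflicts:
        flags.append("declared conflicts of interest: " + ", ".join(s.declared_conflicts))
    return flags

# Example: a suggestion that should prompt extra scrutiny before inviting.
suggestion = ReviewerSuggestion(
    name="Dr. A. Expert",
    email="a.expert@gmail.com",
    institution="",
    coauthored_with_author=3,
)
for flag in screen_suggestion(suggestion):
    print("-", flag)
```

Such a script only flags candidates for manual verification; none of these signals is proof of a fake review on its own.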

In addition, the wider community/readers and commercial platforms can also be included as peer reviewers, for instance through post-publication review (Research Square 2017). The main advantage of commercial review platforms is that reviews can be delivered faster and more efficiently, since they reduce the likelihood of a manuscript going through multiple rounds of peer review (Horbach and Halffman 2019). The rationale behind involving the wider community/readers is that more people will look critically at the paper. Furthermore, involving the wider community/readers is a way to broaden and diversify the peer review process, in line with recent trends towards open science (Funtowicz and Ravetz 2001). A potential drawback of this kind of review is that it has proven difficult to engage sufficient numbers of reviewers for all manuscripts (Campbell 2006, Knoepfler 2015). While wider community review has strong advocates, hard evidence that it traces errors more effectively is not yet available.


To what extent are authors anonymised in your journal's review process?

Author identities are blinded to editor and reviewer
Author identities are blinded to reviewer but known to editor
Author identities are known to editor and reviewer

More information

As part of the current trend towards open science, some also advocate open peer review (Ross-Hellauer, Deppe et al. 2017). However, open peer review raises concerns regarding fairness and equality. There are indications that peer review is biased by the social position of authors (Ceci and Peters 1982, Ross, Gross et al. 2006). To prevent (unconscious) biases, and to promote judgement of manuscripts solely on content, authors’ identities can be blinded to reviewers and/or editors. In general, people are stricter when judging the unknown than the known or familiar (Cao, Han et al. 2009). This also seems to be the case for peer review, since blinding author identities to reviewers is related to fewer retractions (Horbach and Halffman 2019).

In addition to blinding the authors’ identity to reviewers, some journals also blind the authors’ identity to editors, to prevent (unconscious) bias of editors (Tennant, Dugan et al. 2017). However, due care has to be taken in the blinding process. Various strategies are available, ranging from only deleting author names from the title page, to omitting self-citations, to obscuring any potential reference to the authors, including geographical locations of fieldwork and the nationality of research participants. It has been argued that, no matter the strategy taken, reviewers are usually capable of identifying authors nevertheless, which calls the effectiveness of blinding into question (Pontille and Torny 2014). However, this may depend on the size of research fields. Preferences for different forms of author blinding vary strongly among researchers, and consensus on this issue is not likely to come soon.
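As a minimal illustration of the simplest end of this spectrum, the sketch below redacts author names (and thereby simple name-based self-citations) from a manuscript's text. The names and the substitution pattern are assumptions for demonstration only.

```python
import re

# Minimal sketch of the simplest blinding strategy: redact author names
# (and thereby simple "Name (year)"-style self-citations) from the text.
# The names are placeholders; thorough blinding also needs manual checks,
# e.g. for geographical locations of fieldwork.
AUTHOR_NAMES = ["Jansen", "De Vries"]

def blind_text(text: str) -> str:
    """Replace each author name with a neutral [AUTHOR] placeholder."""
    for name in AUTHOR_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "[AUTHOR]", text, flags=re.IGNORECASE)
    return text

print(blind_text("As Jansen (2020) showed, the effect replicates."))
# As [AUTHOR] (2020) showed, the effect replicates.
```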


To what extent are reviewers anonymised in your journal's review process?

Reviewers are anonymous (both to authors and other reviewers as well as to readers of the published manuscript)
Reviewer identities are known to other reviewers of the same manuscript
Reviewer identities are known to the authors
Reviewer identities are known to the readers of the published manuscript

More information

The use of anonymous reviewers is very common, although there are recent calls for open peer review (Ross-Hellauer, Deppe et al. 2017, Horbach and Halffman 2019). “Open peer review” can refer to a wide variety of characteristics of peer review, among which the identity of the reviewers. Traditionally, reviewers’ identities are known only to the editors, but in open review the identities can be disclosed to the authors and even to the readers (Ross-Hellauer 2017). It is argued that open peer review would increase the quality of peer review by leading to more constructive feedback, giving credit to the reviewer, and reducing reviewers’ bias (Godlee 2002). In addition, the disclosure of reviewers’ identities could reduce improper behaviour of reviewers, such as unjustly advising rejection, deliberately delaying publication or plagiarising the manuscript under review (Godlee 2002, Pontille and Torny 2015, Walker and Rocha da Silva 2015, Tennant, Dugan et al. 2017). However, some fear that open peer review could decrease the quality of peer review, since reviewers might fear professional reprisal when submitting critical or negative reviews (Bruce, Chauvin et al. 2016, Ross-Hellauer 2017, Ross-Hellauer, Deppe et al. 2017). This might especially affect early career researchers and others who perceive themselves to be in vulnerable positions. However, research into this potential fear is rather limited.


To what extent are review reports accessible?

Review reports are accessible to authors and editors
Review reports are accessible to other reviewers
Review reports are accessible to readers of the published manuscript
Review reports are publicly accessible

More information

Traditionally, access to review reports is restricted to authors and editors, which is the case in the vast majority of journals (Horbach and Halffman 2019). Recently, however, there has been a trend towards more openness in science, including in the review process. Some journals therefore publish the signed or anonymous review reports alongside the article. There are several potential advantages to publishing review reports. First, readers can make more informed decisions about the validity of the research and its conclusions (Ceci and Peters 1982). Second, it is thought that reviewers will be more precise and constructive when their review reports are published. Finally, open reports can serve as a guide for young researchers to help them get started with reviewing (Ross-Hellauer 2017).

Some studies have aimed to assess whether changing the accessibility of review reports affects the quality of the review, but the results remain inconclusive. Some studies are rather positive, indicating that publishing review reports does not influence reviewers’ willingness to review or the turnaround time, and that published reports are longer and more likely to result in a constructive dialogue (Bornmann, Wolf et al. 2012, Bravo, Grimaldo et al. 2019). On the other hand, another study shows that the possibility of publicly accessible review reports decreases the willingness to review and increases the amount of time taken to write a review report (Van Rooyen, Delamothe et al. 2010). In addition, some have expressed concerns that publishing reports loads the already overburdened publication system with yet more material for researchers to read in their limited time.


To what extent does your journal's review process allow for interaction between reviewers and authors?

No direct interaction between authors and reviewers is facilitated apart from communicating review reports and author responses via editors
Interaction between reviewers is facilitated
Author's responses to review reports are communicated to the reviewer
Interaction between authors and reviewers is facilitated (on top of formal review reports and formal responses to review reports)

More information

Traditionally, no interaction between authors and reviewers is facilitated apart from communicating formal review reports and responses to those. However, some journals facilitate additional interaction amongst reviewers, or between authors and reviewers. For instance, some journals have implemented discussion forums in which reviewers can discuss their assessments of manuscripts and reach consensus before submitting their reports to the authors (Schekman, Watt et al. 2013). Other journals merely facilitate reviewers asking each other or the editor questions, or expressing concerns about a manuscript under review. In addition, some journals have experimented with forums in which authors and reviewers can (openly) discuss a manuscript under review.

In one study, we found that the absence of direct interaction between authors and reviewers is associated with fewer retractions (Horbach and Halffman 2019). However, another study is positive about interaction between reviewers: the vast majority (95%) of reviewers think that interaction between reviewers is valuable to authors (King 2017). One of the proposed downsides of interaction between reviewers is an increase in the time required to finalise reviews. However, this is thought to be an efficient investment of time, since it could prevent several rounds of miscommunication and re-reviewing. Indeed, in a study on interaction between reviewers, over 70% of the articles were accepted after a single round of review (King 2017). Some journals support not only interaction between reviewers, but also interaction between authors and reviewers on top of formal review reports and responses to those.


Specialisation of review

To what extent is your journal's review process structured?

Structured: review occurs through mandatory forms or checklists to be filled out by reviewers
Unstructured: reviewers are free to choose how to organise their review and are not presented with questions or criteria for judgement
Semi-structured: reviewers are guided by some (open) questions or are presented with several criteria for judgement

More information

The tasks of reviewers can be structured to varying extents, ranging from very open assignments with very little guidance to strict forms with obligatory aspects to be scored on Likert scales. Structuring the review process might be specifically relevant if you want reviewers to check for compliance with reporting guidelines. Several studies have shown this compliance to be low, suggesting that more scrutiny is required (Baker, Lidster et al. 2014, Bandrowski, Brush et al. 2016).

In contrast, it is suggested that asking reviewers to score specific aspects of a manuscript potentially distracts their attention from other aspects, thereby effectively narrowing down the scope of review. Hence, structuring review can be particularly beneficial in case a journal wants to focus its review process on certain aspects of submitted manuscripts.
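To make this concrete, the sketch below shows one way a structured review form could be represented, with mandatory items scored on a Likert scale. The item texts and the five-point scale are assumptions for illustration, not a recommended standard.

```python
# Illustrative structured review form: every item is mandatory and must be
# scored on a 1-5 Likert scale. Item texts are placeholder assumptions.
LIKERT = range(1, 6)  # 1 = very poor ... 5 = excellent

REVIEW_FORM = [
    "Soundness of the methodology",
    "Compliance with the relevant reporting guideline",
    "Clarity of the statistical analyses",
    "Support of the conclusions by the data",
]

def validate_review(scores: dict) -> None:
    """Raise if any mandatory item is missing or scored off the scale."""
    for item in REVIEW_FORM:
        if scores.get(item) not in LIKERT:
            raise ValueError(f"Missing or invalid score for: {item!r}")

validate_review({item: 4 for item in REVIEW_FORM})  # passes silently
```

The trade-off discussed above is visible in the form itself: whatever is not on the list is less likely to receive the reviewer's attention.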


To what extent does your journal's review process use specialist statistical review?

Not applicable (statistics does not play a role in my journal's research area)
No special attention is given to statistical review
Incorporated in review (assessing statistics is part of reviewer's and editor's tasks)
Statistical review is performed by an additional, specialist reviewer
Statistical review is performed through automated computer software

More information

Statistics review can be performed in several ways, for instance by specialist statistics reviewers or by automated computer software. Both have been indicated to have certain benefits. In one of our studies, we found review by specialist reviewers to be associated with fewer retractions (Horbach and Halffman 2019). A meta-analysis also found specialist statistics review to be favourable over ordinary review (Bruce, Chauvin et al. 2016). In addition, statistics review through automated software has been shown to detect reporting mistakes and inconsistencies in the reporting of statistical results (Nuijten, Hartgerink et al. 2016). Several such tools are currently under development or have been released in recent years.
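As an illustration of the kind of check such software performs, the sketch below recomputes a two-tailed p-value from a reported t statistic and its degrees of freedom and flags a mismatch, in the spirit of the checks described by Nuijten, Hartgerink et al. (2016). The tolerance value is an assumption for demonstration.

```python
from scipy import stats

def t_test_consistent(t: float, df: int, reported_p: float,
                      tol: float = 0.005) -> bool:
    """Recompute the two-tailed p-value for a reported t statistic and
    compare it with the reported p-value (tolerance is illustrative)."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed_p - reported_p) <= tol

# "t(28) = 2.20, p = .04" is internally consistent (recomputed p ≈ .036) ...
print(t_test_consistent(t=2.20, df=28, reported_p=0.04))   # True
# ... whereas "t(28) = 2.20, p = .01" is not.
print(t_test_consistent(t=2.20, df=28, reported_p=0.01))   # False
```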


To what extent does your journal accept or use reviews from external sources?

No reviews from external sources are used
Reviews from other (partner) journals accompanying manuscripts rejected elsewhere are used
Reviews from commercial review platforms are used
Reviews performed by the wider community (i.e. not by invited or targeted reviewers) are used (e.g. reviews on public forums)

More information

In new initiatives, review is not necessarily performed by the journal to which the article is submitted. These initiatives most notably include the formation of journal consortia in which reviews of manuscripts rejected at one journal are transferred to other journals. This reduces the number of times a manuscript needs to be reviewed (Horbach and Halffman 2018). In addition, through cooperation between journals, manuscripts might be directed towards the most relevant outlet. However, concerns have been expressed that such cascading review practices might enhance the stratification of publication outlets, favouring traditional journals and potentially hindering the introduction of new outlets.

Besides cooperation with other journals, it is also possible to use reviews from commercial platforms and reviews performed by the wider community. The use of reviews from other (partner) journals is related to fewer retractions (Horbach and Halffman 2019). A pilot study has shown the ability of a commercial review platform to detect integrity issues (Pattinson and Prater 2017).


Technological support in review

What forms of digital tools are used in your journal's review process?

No digital tools are used
Digital tools to check references are used (e.g. to check for references to retracted articles, or references to articles published in predatory journals)
Plagiarism detection software is used
Digital tools to assess validity or consistency of statistics are used
Digital tools to detect image manipulation are used

More information

There are several technical applications that can help in review, such as plagiarism scanners, software to detect image manipulation, and software to check references (Burley, Moylan et al. 2017, Scheman and Bennett 2017, Horbach and Halffman 2019). Even though hard evidence of their benefits is usually scarce, the use of technical support is related to fewer retractions (Horbach and Halffman 2019). In addition, using digital tools instead of manual checks can save time. Beyond the tools listed above, machine learning techniques to assess consistency and completeness are emerging.
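As a minimal example of the reference checks mentioned above, the sketch below screens a manuscript's cited DOIs against a set of retracted DOIs. The DOIs are placeholders, and a production tool would query a curated retraction database rather than a hard-coded list.

```python
# Minimal sketch of a reference check: flag cited DOIs that appear on a
# retraction list. The DOIs below are placeholders, not real articles.
RETRACTED_DOIS = {
    "10.1234/example.retracted.001",
    "10.1234/example.retracted.002",
}

def flag_retracted(reference_dois: list) -> list:
    """Return the cited DOIs that appear on the retraction list."""
    return [doi for doi in reference_dois if doi.lower() in RETRACTED_DOIS]

manuscript_refs = ["10.1234/example.retracted.001", "10.5678/sound.paper.42"]
print(flag_retracted(manuscript_refs))
# ['10.1234/example.retracted.001']
```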


To what extent does your journal's review process allow for reader commentary after publication of a manuscript?

No direct reader commentary is facilitated
Reader commentary is facilitated on the journal's website
Out-of-channel reader commentary is facilitated (e.g. by providing links to commentary on other platforms such as PubPeer)

More information

The facilitation of reader commentary seems to be a useful mechanism to flag issues, and hence contributes to the quality of the published record (Horbach and Halffman 2019). Reader commentary can be facilitated in-channel, for instance through comment sections on your journal's webpage; effectively, this constitutes a form of post-publication review. Alternatively, commentary can be facilitated out-of-channel, using other platforms where readers can provide feedback or comments on published manuscripts. Several such platforms have emerged over the past decade and increasingly provide an additional regulating mechanism in science (Biagioli and Lippman 2020).


Suggestions for further reading

  • Baker, D., et al. (2014). "Two years later: journals are not yet enforcing the ARRIVE guidelines on reporting standards for pre-clinical animal studies." PLoS biology 12(1): e1001756.
  • Bandrowski, A., et al. (2016). "The Resource Identification Initiative: A cultural shift in publishing." Journal of Comparative Neurology 524(1): 8-22.
  • Biagioli, M. and A. Lippman (2020). Gaming the Metrics: Misconduct and Manipulation in Academic Research, MIT Press.
  • Bornmann, L., et al. (2012). "Closed versus open reviewing of journal manuscripts: how far do comments differ in language use?" Scientometrics 91(3): 843-856.
  • Bravo, G., et al. (2019). "The effect of publishing peer review reports on referee behavior in five scholarly journals." Nature communications 10(1): 322.
  • Bruce, R., et al. (2016). "Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis." BMC medicine 14(1): 85.
  • Burley, R., et al. (2017). What might peer review look like in 2030, BioMed Central London, UK.
  • Campbell, P. (2006). Nature Peer Review Trial and Debate, Nature.
  • Cao, H. H., et al. (2009). "Fear of the unknown: Familiarity and economic decisions." Review of Finance 15(1): 173-206.
  • Ceci, S. J. and D. P. Peters (1982). "Peer review: A study of reliability." Change: The Magazine of Higher Learning 14(6): 44-48.
  • Center for Open Science (2018). "Registered Reports: Peer review before results are known to align scientific values and practices - Participating Journals." Retrieved October 4, 2018, from https://cos.io/rr/.
  • Funtowicz, S. and J. R. Ravetz (2001). "Peer review and quality control." International Encyclopaedia of the Social and Behavioural Sciences, Elsevier: 11179-11183.
  • Gao, J. and T. Zhou (2017). "Stamp out fake peer review." Nature 546(7656): 33.
  • Godlee, F. (2002). "Making reviewers visible: openness, accountability, and credit." JAMA 287(21): 2762-2765.
  • Horbach, S. P. J. M. and W. Halffman (2018). "The changing forms and expectations of peer review." Research integrity and peer review 3(1): 8.
  • Horbach, S. P. J. M. and W. Halffman (2019). "The ability of different peer review procedures to flag problematic publications." Scientometrics 118(1): 339-373.
  • Horbach, S. P. J. M. and W. Halffman (2019). "Journal Peer Review and Editorial Evaluation: Cautious Innovator or Sleepy Giant?" Minerva.
  • King, S. R. (2017). "Peer Review: Consultative review is worth the wait." eLife 6: e32012.
  • Knoepfler, P. (2015). "Reviewing post-publication peer review." Trends in Genetics 31(5): 221-223.
  • Malički, M., et al. (2019). "Journals’ instructions to authors: A cross-sectional study across scientific disciplines." Plos One 14(9): e0222157. https://doi.org/10.1371/journal.pone.0222157
  • Mellor, D. (2016). "Registered Reports: Peer review before results are known to align scientific values and practices." From https://cos.io/rr/.
  • Nosek, B. A. and D. Lakens (2014). "Registered reports: A method to increase the credibility of published results." Social Psychology 45(3): 137-141.
  • Nuijten, M. B., et al. (2016). "The prevalence of statistical reporting errors in psychology (1985–2013)." Behavior research methods 48(4): 1205-1226.
  • Pattinson, D. and C. Prater (2017). Assessment of the prevalence of integrity issues in submitted manuscripts. Eighth International Congress on Peer Review and Scientific Publication.
  • PLOS (2018). "Guidelines for Reviewers." Retrieved 2018, from http://journals.plos.org/plosone/s/reviewer-guidelines.
  • Pontille, D. and D. Torny (2014). "The Blind Shall See! The Question of Anonymity in Journal Peer Review." Ada: A Journal of Gender, New Media, and Technology 4.
  • Pontille, D. and D. Torny (2015). "From manuscript evaluation to article valuation: the changing technologies of journal peer review." Human Studies 38(1): 57-79.
  • Research Square (2017). "Editorial Checks & Badges." from https://www.researchsquare.com/publishers/badges.
  • Ross-Hellauer, T. (2017). "What is open peer review? A systematic review." F1000Research 6.
  • Ross-Hellauer, T., et al. (2017). "Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers." PloS one 12(12): e0189311.
  • Ross, J. S., et al. (2006). "Effect of blinded peer review on abstract acceptance." JAMA 295(14): 1675-1680.
  • Schekman, R., et al. (2013). "The eLife approach to peer review." eLife 2.
  • Scheman, R. and C. N. Bennett (2017). Assessing the outcomes of introducing a digital image quality control review into the publication process for research articles in physiology journals. International Congress on Peer Review and Scientific Publication.
  • Strang, D. and K. Siler (2015). "Revising as reframing: Original submissions versus published papers in Administrative Science Quarterly, 2005 to 2009." Sociological Theory 33(1): 71-96.
  • Tancock, C. (2018). "When reviewing goes wrong: the ugly side of peer review." Elsevier Editors' Update.
  • Tennant, J. P., et al. (2017). "A multi-disciplinary perspective on emergent and future innovations in peer review." F1000Research 6.
  • Van Rooyen, S., et al. (2010). "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial." BMJ 341: c5729.
  • Walker, R. and P. Rocha da Silva (2015). "Emerging trends in peer review—a survey." Frontiers in Neuroscience 9(169).