
Thank you

Thank you for sharing your journal’s editorial procedures on our Platform for Responsible Editorial Policies (PREP). We will check your submission shortly and then add it to the platform’s database. If you have any questions or remarks, please feel free to contact us via the website’s contact form. If you change your editorial procedures in the future, please let us know by resubmitting them.

Below you can find an overview of your submission as well as tailored suggestions on how to change or improve your editorial workflow. For more information on the various options for organising peer review, visit the information page.

Journal: Revista Tecnología, Ciencia y Educación

ISSN: 2444-2887

Publisher: Centro de Estudios Financieros

Timing and selectiveness

At what stage of the publication process does review take place?

Post-publication review
Pre-publication review
No review takes place
Pre-submission review (including registered reports)

Suggestion

Traditionally, peer review occurs between the submission and publication of a manuscript. However, two new peer review procedures have been proposed. Pre-submission peer review has been introduced to address the growing concern of publication bias favouring positive research results and is thought to prevent data tweaking. It involves peer review of research protocols, so-called ‘registered reports’, before the empirical research is carried out. Our data suggest that pre-submission peer review is associated with fewer retractions, although systematic evidence about its advantages is still limited (Horbach and Halffman 2019). Besides pre-submission peer review, post-publication peer review has been introduced to facilitate faster knowledge exchange, including scrutiny and qualification of published research findings by a wider audience. While evidence about the effects of these innovations is scarce, it does appear to suggest benefits, in particular for pre-submission review, which could therefore be worth considering.

What quality criteria does your journal use for peer review?

Methodological rigour and correctness
Anticipated impact (either within or outside of science)
Novelty
Fit with journal's scope
Other: Originality of the manuscript; methodology; quality of the results and conclusions, and consiste

Suggestion

Focusing on anticipated impact and/or novelty could contribute to publication bias favouring positive results, as well as potentially incentivise data tweaking. In fact, a study found that selecting on these criteria is associated with more retractions (Horbach and Halffman 2019). While these are legitimate selection criteria, we would recommend considering other selection criteria on top of, or instead of, anticipated impact and/or novelty.

Openness of review

What type of reviewers are included in your journal's peer review process?

Commercial review platforms
Editor-in-chief
Wider community / readers
Editorial committee
External reviewers suggested and selected by editor(s)
External reviewers suggested by authors

Suggestion

The wider community/readers and commercial platforms could also be included as peer reviewers, for instance through post-publication review. The main advantage of commercial review platforms is that reviews can be done faster and more efficiently, since they reduce the likelihood of a manuscript going through multiple rounds of peer review (Horbach and Halffman 2019). The rationale behind involving the wider community/readers is that more people will look at the paper critically. Furthermore, involving the wider community/readers broadens and diversifies the peer review process, in line with recent trends towards open science (Funtowicz and Ravetz 2001). Therefore, these options might be worthwhile to consider for your journal.

To what extent are authors anonymised in your journal's review process?

Author identities are blinded to editor and reviewer
Author identities are blinded to reviewer but known to editor
Author identities are known to editor and reviewer

Suggestion

The reason for blinding the identities of authors in the peer review process is to prevent (unconscious) biases and to promote judgement of manuscripts solely on content. In general, people are stricter when judging the unknown rather than the known or familiar (Cao, Han et al. 2009). This also seems to be the case for peer review, since blinding author identities to reviewers is associated with fewer retractions (Horbach and Halffman 2019). In addition to blinding authors’ identities to reviewers, some journals also blind them to editors, to prevent (unconscious) bias on the editors’ part. The effect of this extra blinding remains to be elucidated. However, if your journal faces issues with diversity and suspects that (unconscious) biases play a role in them, it might be worthwhile to consider blinding authors’ identities to editors as well.

To what extent are reviewers anonymised in your journal's review process?

Reviewers are anonymous (both to authors and other reviewers as well as to readers of the published manuscript)
Reviewer identities are known to other reviewers of the same manuscript
Reviewer identities are known to the authors
Reviewer identities are known to the readers of the published manuscript

Suggestion

The use of anonymous reviewers is very common, although there are recent calls for open identities in peer review (Ross-Hellauer, Deppe et al. 2017). It is suggested that open identities would increase the quality of peer review by leading to more constructive feedback, giving credit to the reviewer, and reducing reviewers’ bias. However, some fear that open identities could decrease the quality of peer review, since reviewers might fear professional reprisal when submitting critical or negative reviews (Horbach and Halffman 2018). This might especially affect early-career researchers or others who perceive themselves to be in vulnerable positions. Therefore, it might be worthwhile to look into disclosing the identities of reviewers to other reviewers, authors, and/or readers of the article. However, there also seem to be good reasons to maintain your current review procedure.

To what extent are review reports accessible?

Review reports are accessible to authors and editors
Review reports are accessible to other reviewers
Review reports are accessible to readers of the published manuscript
Review reports are publicly accessible

Suggestion

Traditionally, access to review reports is restricted to authors and editors. Recently, however, there has been a trend towards more openness in science, including in the review process. Therefore, some journals publish the (anonymous) review reports alongside the article. In this way, readers can make more informed decisions about the validity of the research and its conclusions. In addition, it is thought that reviewers will be more precise and constructive when their review reports are published (Ross-Hellauer 2017). Some studies have aimed to assess whether changing the accessibility of review reports affects the quality of the review, but the results remain inconclusive. Some studies are rather positive, indicating that publishing review reports does not influence reviewers’ willingness to review or the turnaround time, and that published comments are longer and more likely to result in a constructive dialogue (Bornmann, Wolf et al. 2012, Bravo, Grimaldo et al. 2019). On the other hand, another study shows that the prospect of publicly accessible review reports results in a decrease in the willingness to review and an increase in the time taken to write a review report (Van Rooyen, Delamothe et al. 2010). Nonetheless, sharing review reports beyond authors and editors seems to carry several benefits for the review process and hence seems worthwhile to consider for your journal.

To what extent does your journal's review process allow for interaction between reviewers and authors?

No direct interaction between authors and reviewers is facilitated apart from communicating review reports and author responses via editors
Interaction between reviewers is facilitated
Author's responses to review reports are communicated to the reviewer
Interaction between authors and reviewers is facilitated (on top of formal review reports and formal responses to review reports)

Suggestion

Traditionally, no interaction between authors and reviewers is facilitated apart from communicating formal review reports and responses to those. However, some journals facilitate additional interaction amongst reviewers, or between authors and reviewers. In one study, we found that having no interaction between authors and reviewers is associated with fewer retractions (Horbach and Halffman 2019). However, another study is positive about interaction between reviewers: the vast majority (95%) of reviewers think that interaction between reviewers is valuable to authors (King 2017). One of the proposed downsides of interaction between reviewers is an increase in the time required to finalise reviews. However, it is thought to be an efficient investment of time, since it could prevent several rounds of miscommunication and re-reviewing. Indeed, in a study on interaction between reviewers, over 70% of the articles were accepted after a single round of review (King 2017). Some journals support not only interaction between reviewers but also between authors and reviewers. However, the effects of these interactions remain to be elucidated. Therefore, there are reasons to maintain your current review procedure, but other levels of interaction might be worth exploring. In any case, reviewers usually appreciate being notified about the final decision on a manuscript they reviewed.

Specialisation of review

To what extent is your journal's review process structured?

Structured: Review occurs through mandatory forms or checklists to be filled out by reviewers
Unstructured: Reviewers are free to choose how to organise their review and are not presented with questions or criteria for judgement
Semi-structured: Reviewers are guided by some (open) questions or are presented several criteria for judgement

Suggestion

The tasks of reviewers can be structured to varying extents, ranging from very open assignments with little guidance to strict forms with obligatory aspects to be scored on Likert scales. Structuring the review process might be particularly relevant if you want reviewers to check for compliance with reporting guidelines. Several studies have shown this compliance to be low, suggesting that more scrutiny is required (Baker, Lidster et al. 2014, Bandrowski, Brush et al. 2016). In contrast, it has been suggested that asking reviewers to score specific aspects of a manuscript potentially distracts their attention from other aspects, thereby effectively narrowing the scope of review. Therefore, depending on your expectations of review, it might be worthwhile to reconsider your review procedures.

To what extent does your journal's review process use specialist statistical review?

Not applicable (statistics does not play a role in my journal's research area)
No special attention is given to statistical review
Incorporated in review (assessing statistics is part of reviewer's and editor's tasks)
Statistical review is performed by an additional, specialist reviewer
Statistics review is performed through automatic computer software

Suggestion

Statistics review can be performed in several ways: besides incorporating it into the standard review process, it can be performed by specialist statistics reviewers or by automated computer software. Both have been indicated to have certain benefits. In one of our studies, we found review through specialist reviewers to be associated with fewer retractions (Horbach and Halffman 2019). Another meta-analysis found specialist statistics review to be favourable over ordinary review (Bruce, Chauvin et al. 2016). In addition, statistics review through automated software has shown its ability to detect reporting mistakes and inconsistencies in the reporting of statistical results (Nuijten, Hartgerink et al. 2016). Based on this, it might be worthwhile to consider specialist statistics reviewers or automated computer software for statistical review.
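To give a concrete impression of what automated statistics checking can look like, below is a minimal, purely illustrative sketch in the spirit of tools such as statcheck (cf. Nuijten, Hartgerink et al. 2016): it recomputes the p-value of an APA-style t-test report from the reported t statistic and degrees of freedom and flags mismatches. The regular expression, tolerance, and function names are our own assumptions for illustration, not part of any particular tool.

```python
# Illustrative sketch of an automated consistency check on reported statistics.
# Only exact "p =" reports of t-tests are handled; real tools cover many more cases.
import re
from scipy import stats

# APA-style pattern, e.g. "t(28) = 2.20, p = .012" (an assumption for this sketch)
APA_T_TEST = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.?\d*)\s*,\s*p\s*=\s*(?P<p>0?\.\d+)"
)

def check_sentence(sentence: str, tolerance: float = 0.005) -> list[str]:
    """Flag t-test reports whose reported p-value does not match t and df."""
    issues = []
    for m in APA_T_TEST.finditer(sentence):
        df, t, p_reported = int(m["df"]), float(m["t"]), float(m["p"])
        p_recomputed = 2 * stats.t.sf(abs(t), df)  # two-sided p from t and df
        if abs(p_recomputed - p_reported) > tolerance:
            issues.append(f"{m.group(0)}: recomputed p = {p_recomputed:.3f}")
    return issues

# Example: the reported p does not match t(28) = 2.20, so the report is flagged.
print(check_sentence("The effect was significant, t(28) = 2.20, p = .012."))
```

Such checks are cheap to run on every submission and can prompt reviewers or editors to ask authors for clarification before publication.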

To what extent does your journal accept or use reviews from external sources?

No reviews from external sources are used
Reviews from other (partner) journals accompanying manuscripts rejected elsewhere are used
Reviews from commercial review platforms are used
Reviews performed by the wider community (i.e. not by invited or targeted reviewers) are used (e.g. reviews on public forums)
Other

Suggestion

Through cooperation with (partner) journals, it is possible to reduce the number of times a manuscript needs to be reviewed. Moreover, the use of reviews from other (partner) journals is related to fewer retractions (Horbach and Halffman 2019). In addition, it is possible to use reviews from commercial platforms and reviews performed by the wider community. There is still insufficient data to draw conclusions on the benefits and drawbacks of these external sources. However, the positive effect of sharing review reports with partner journals is promising. Therefore, it might be worthwhile to consider the use of review reports from (partner) journals and reviews performed by commercial platforms and/or the wider community.

Technological support in review

What forms of digital tools are used in your journal's review process?

No digital tools are used
Digital tools to check references are used (e.g. to check for references to retracted articles, or references to articles published in predatory journals)
Plagiarism detection software is used
Digital tools to assess validity or consistency of statistics are used
Digital tools to detect image manipulation are used
Other

Suggestion

There are several technical applications that could be of help in review, and the use of technical support is related to fewer retractions (Horbach and Halffman 2019). Therefore, it might be worthwhile to consider the use of more digital tools in addition to plagiarism detection software. Even though hard evidence of their benefits is usually scarce, you could consider digital tools to assess the validity or consistency of statistics, tools to detect image manipulation, tools to check references, and upcoming machine learning techniques to assess consistency and completeness. A simple example of a reference-checking step is sketched below.
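As an illustration of reference checking, the sketch below scans a manuscript’s reference list for DOIs and flags any that appear in a locally maintained list of retracted articles (for example, an export of the Retraction Watch Database). The file names, file format, and function names are assumptions made purely for illustration.

```python
# Illustrative sketch: flag cited DOIs that appear in a local list of retracted articles.
import csv
import re

# Rough DOI pattern; real tools normalise DOIs more carefully.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s;,<>]+", re.IGNORECASE)

def load_retracted_dois(path: str) -> set[str]:
    """Read a one-column CSV of retracted DOIs (assumed format) into a lowercase set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def flag_retracted_references(reference_list: str, retracted: set[str]) -> list[str]:
    """Return the DOIs in a manuscript's reference list that appear in the retracted set."""
    cited = {doi.rstrip(".").lower() for doi in DOI_PATTERN.findall(reference_list)}
    return sorted(cited & retracted)

# Hypothetical input files for the sake of the example.
retracted = load_retracted_dois("retracted_dois.csv")
print(flag_retracted_references(open("references.txt", encoding="utf-8").read(), retracted))
```

A flagged DOI would then be passed to the handling editor, who can decide whether citing the retracted work is justified (for instance, when the retraction itself is being discussed).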

To what extent does your journal's review process allow for reader commentary after publication of a manuscript?

No direct reader commentary is facilitated
Reader commentary is facilitated on the journal's website
Out-of-channel reader commentary is facilitated (e.g. by providing links to commentary on other platforms such as PubPeer)

Suggestion

The facilitation of reader commentary seems to be a useful mechanism to flag issues, and hence contributes to the quality of the published record (Horbach and Halffman 2019). Therefore, it might be worthwhile to consider facilitating reader commentary, either in-channel, on your journal’s webpage, or out-of-channel using other platforms.