Verity Warne
Senior Marketing Manager, Author Marketing, Wiley

In 2013, more than 1.4 million scholarly articles were published in peer-reviewed journals. As the number of scholarly articles published each year increases, so does the pressure on peer review. In the 2009 Sense About Science peer review survey, 67% of respondents felt that formal training of reviewers would improve the quality of reviews. While we know that Principal Investigators may ask Early Career Researchers to undertake reviews, the same survey reported that few established reviewers – just 3.2% – train younger colleagues as part of the review process itself. So researchers are telling us that despite the importance of peer review, and the amount of review activity most researchers undertake, there is a lack of guidance on how to perform a good review, and reviewers are expected to learn on the job.

In December, we held a webinar aimed at Early Career Researchers called Getting Peer Review Right. The third in our series of webinars for ECRs, the session clearly struck a chord, drawing a high turn-out despite taking place during the hectic holiday season.

The recording of the session is available here: https://www.brighttalk.com/webcast/11201/134767

For those still busy writing New Year’s resolutions, here are some of the takeaways that emerged from the session:

Why be a reviewer?

Michael Willis (President Elect of ISMTE) highlighted the personal benefits to ECRs of becoming a reviewer – namely, the ‘virtuous circle’ of peer review and engagement with the research community. Review activity is reflected back into the way reviewers undertake their own work; it improves the reviewer’s research, as well as that of the authors whose work is being reviewed: “You conduct your own research and, alongside this, you review the research output of others. This enables you to find out what others are doing in your particular field, as well as to critique that research – not just the articles you formally peer review but also the broader research being undertaken in the area. That in turn will inform the way you conduct your own research – the methods you use, the approaches you take, the ideas you pursue – and it will eventually lead to you adapting or even continuing firmly in the way you conduct your own research.”


What do editors want from reviewers?

Matthias Starck (Editor-in-Chief of the Journal of Morphology) emphasized honesty and best ethical practice as the foundations of good peer review. He also highlighted the importance of detailed feedback in helping editors reach their decisions. While decision recommendations from reviewers are helpful, the editor’s final decision will be based on the full referee report. For Professor Starck, a critical reviewer is preferable to a lenient one: “It is easier to handle excessive critique than false positive reviews.”

How do I write a good peer review?

Dr John Langley (Head of Characterisation and Analytics, Chemistry, University of Southampton) and Paul Trevorrow (Executive Journals Editor, Wiley) provided a detailed, highly practical walk-through of the reviewer report, which typically comprises a referee’s questionnaire, a narrative report for the author, and comments to the editor. Their top tips include:

• Look at the questionnaire first to remind yourself of the questions to be considered. Then write the narrative report, and return to the questionnaire afterwards.

• Use simple, clear language throughout the report, bearing in mind that not all authors and editors are native English speakers. Language should be appropriately technical but not verbose, and any claims should be substantiated.

• Reviewers sometimes struggle with whether to label specific feedback as a major or minor issue. Generally speaking, any comment requiring extra work from the researcher (re-drawing figures, new experiments) counts as a major issue. Minor issues tend to concern the text and use of language, though there are plenty of grey areas within this category. Ultimately the reviewer’s role is to highlight the need for revisions; as long as they are flagged somewhere, the editor can decide how to classify them.

The ethics of peer review

Emily Jesper from Sense About Science raised the following key points:

• Keep it confidential.  Destroy submitted manuscripts after you have completed the review.

• If you plan to ask a colleague to do the review, ask the journal first and make sure they receive the credit.

• Provide a timely review (reviewers shouldn’t slow down publication to enable them to get a paper out first!).

• Declare any conflicts of interest, whether real or perceived.

• Don’t make hostile, insulting or defamatory remarks. Rather, support your points with evidence.

• Don’t request that the author cite your own papers (unless there is a strong scholarly rationale for this).

• Consider whether the research was ethical and whether there has been any misconduct. Is it clear to the reader who funded the study? (If there was no specific funding, this should be stated.) Raise any suspicions of plagiarism or redundant publication (self-plagiarism).

The following delegate questions were not answered during the webinar, so our presenters have provided responses below:


How do I build confidence in my peer reviewing skills?

Like most aspects of life, confidence is often a function of experience; however, we all have to start somewhere! Much like learning to drive, enlisting the support of an experienced mentor and/or familiarizing yourself with the theoretical and mechanical aspects of peer review should help to build confidence.

I strongly suggest that Wiley include a mandatory authorship contribution section. This would help prevent gift and ghost authorship by providing upfront guidelines.

Our Publishing Ethics guidelines strongly encourage editors to adopt the ICMJE guidelines on contributorship, and we recommend the following in Section 5.1, ‘Authorship’: ‘Journals should consider requesting that authors provide a short description of each author's contribution in an Acknowledgment.’

We usually check for plagiarism with the help of different software packages. I have noticed that different plagiarism-checking tools give different uniqueness values; what should be done in those cases? Moreover, some packages are very sensitive and even report matches from unrelated sources such as newspapers and government policies in related and unrelated fields; should we treat those as general terms and ignore the uniqueness percentage?

Plagiarism checking software is a useful tool for checking against published text. The more published text you can check against, the better. The purpose of the software is not to give you a definitive answer as to whether something has been plagiarized or not, and you should always check the results carefully rather than rely on a percentage match threshold. Also, bear in mind that plagiarism can consist of unattributed replication of ideas, as well as unattributed copying of verbatim text.

A review questionnaire for a paper I’m reviewing includes a box for answering: "Which figures should be reproduced?" How do you answer this question?

This question appears to ask the referee to highlight the figures that are essential for understanding the manuscript – in other words, those images that must be reproduced in the final version of record. That said, if you have any doubts about any element of the questionnaire, ask the journal’s editor(s) for clarification. Not only will this help you answer the questionnaire accurately, it will also give the editors an opportunity to improve the clarity of their referee questionnaire.

Image Credit/Source: Abel Mitja Varela/Getty Images