Noel McGlinchey
Senior Editorial Assistant

Every Editorial Office experiences difficulties with late reviews. Arguably, the biggest sin a reviewer can commit, outside of dishonesty, is to promise a review and then fail to send it in. Manuscripts become delayed, authors get upset, and editors get stressed. Furthermore, while journals with fast turnaround times like to publicize them, it is clear that without effective and timely reviewers no peer-reviewed journal could possibly offer a rapid service to authors.


Editorial Offices tend to send a series of automatic e-mails to reviewers once they have agreed to contribute a review. These start with an initial confirmatory e-mail giving instructions and the date by which the review is due, followed by a series of reminder e-mails up to (and beyond) the due date. For the purposes of this story I’ll refer to all of these automatic e-mails as Reminders. The question that a team of interested peer review management practitioners, drawn from around 30 Wiley Editorial Offices across the globe, set out to answer was:


Can reviewer turnaround times be improved by better scheduling of Reminders?


With so many offices involved in the study, it was possible to use historical data (from October 2015) to examine the natural variation in the use of Reminders and to see how factors such as the days on which Reminders were sent, the number of Reminders, and the amount of time reviewers were allowed to return their reviews (Days Allowed) affected outcomes across Editorial Offices. The typical time for reviews to be returned (Median Days) was considered, as well as the longest review time for each office within the time frame studied (Maximum Days). The final results were surprising.
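
As a rough illustration of the kind of summary involved, here is a minimal sketch in Python with pandas, using invented records and assumed column names rather than the study’s actual data, of how Median Days and Maximum Days could be calculated for each office:

    import pandas as pd

    # Hypothetical review records; the column names are illustrative assumptions.
    reviews = pd.DataFrame({
        "office":         ["A", "A", "A", "B", "B", "B"],
        "days_allowed":   [14, 14, 14, 21, 21, 21],
        "days_to_return": [10, 22, 13, 18, 38, 20],
    })

    # Median Days and Maximum Days per office, as described above.
    summary = (
        reviews.groupby(["office", "days_allowed"])["days_to_return"]
               .agg(median_days="median", maximum_days="max")
               .reset_index()
    )
    print(summary)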


Days Allowed for Peer Review

The vast majority of Editorial Offices set Days Allowed at 14 or 21 days. In this time period most offices sent three to five Reminders, with 21-day offices tending to send out more. This wasn’t very surprising, but the trend that emerged next was a bit more interesting.


Reducing Days Allowed seemed to reduce Maximum Days to Return Reviews (see figure 1). The trends aren’t statistically significant. However, going from 21 to 14 Days Allowed, the typical Maximum Days figure drops clearly from 38 to 22. Late reviews are those that cause the most problems, so if reducing Days Allowed reduces late reviews it seems well worth giving it a go.


Figure 1


Of course the data is not statistically significant and (famously) correlation does not show causation. Without a wider range of data (e.g. from journals with 30 or 7 Days Allowed) it’s difficult for statistical techniques to pick out significant trends. There is also bound to be a lot of natural variation between journals run by different editorial teams in different styles, which would further obscure any trends.


Number of Reminders

With such differences in outcomes correlating with Days Allowed, it was important to consider the rest of the data in two sets: a 21 Days Allowed set and a 14 Days Allowed set. Looking specifically at the 21-day set, it’s pretty clear that the number of Reminders sent out is very important (see figure 2). If fewer than two Reminders are sent, Maximum Days is longer. Increasing the number of Reminders above four doesn’t have an effect on Median Days, but does seem to have a slight effect in reducing Maximum Days. So four to five Reminders seems to be the useful limit for a 21-day review period. Further data from journals allowing only 14 days indicates there’s little point in sending more than three Reminders. Note: the initial confirmatory e-mail counts as a Reminder.


Figure 2
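
For anyone who wants to repeat this split on their own journal’s data, a small sketch along the same lines as before (again with invented numbers and assumed column names, not the study’s data) might look like this:

    import pandas as pd

    # Hypothetical records including the number of Reminders sent per review.
    reviews = pd.DataFrame({
        "days_allowed":   [14, 14, 14, 21, 21, 21, 21],
        "n_reminders":    [2, 3, 3, 2, 3, 4, 5],
        "days_to_return": [20, 15, 16, 35, 30, 24, 22],
    })

    # Treat the 14-day and 21-day journals as separate sets, then summarise
    # turnaround by the number of Reminders sent.
    for days_allowed, subset in reviews.groupby("days_allowed"):
        by_reminders = (
            subset.groupby("n_reminders")["days_to_return"]
                  .agg(median_days="median", maximum_days="max")
        )
        print(f"Days Allowed = {days_allowed}")
        print(by_reminders)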


Timing of Reminders

When comparing the timing of the last Reminder for 21-day journals, a last Reminder sent on Day 14 would be seven days before the due date, while one sent on Day 19 would be two days before the due date (see figure 3). Whether looking at journals sending Reminders close to seven days before the deadline or close to the deadline itself, the correlations were all weak, with r-squared values below 0.1. In other words, the timing of the last Reminder explains less than 10% of the variation in how quickly reviews come back. So, in everyday terms, the timing of Reminders barely matters. It’s the number of Reminders and Days Allowed that count.


Figure 3
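
The r-squared check itself is easy to reproduce. Below is a small sketch with made-up numbers (not the study’s data), just to show what an r-squared below 0.1 looks like in practice:

    import numpy as np

    # Day on which the last Reminder was sent (21-day journals) and the days
    # taken to return the review -- both invented for illustration.
    last_reminder_day = np.array([14, 15, 16, 17, 18, 19])
    days_to_return    = np.array([24, 30, 22, 27, 25, 29])

    r = np.corrcoef(last_reminder_day, days_to_return)[0, 1]
    r_squared = r ** 2

    # An r-squared this low means the timing of the last Reminder explains
    # only a small fraction of the variation in turnaround time.
    print(f"r-squared = {r_squared:.2f}")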

How does this make sense?

Do you have an editor who always answers e-mails on the day you send them? (If so, please can I come and work with you?) Academics, even editors with a strong commitment, don’t tend to read e-mails as they arrive. They travel, carry out studies, lecture and even sleep. I can imagine that a reviewer with only one Reminder from ‘The Slow Journal of Slowness’ in her inbox might think that her review isn’t urgent, while ‘Fast Journal’ may already have four Reminders sitting there unread. If you are a reviewer with outstanding reviews for both journals, which one will you do first? So, the take-away message is to make sure you are sending out enough Reminders, and not to worry too much about the exact timing of their deployment as long as it is reasonable. And therein lies a whole maze worthy of Capability Brown, full of avenues for research.


What Next?

In light of this, if you decide to reduce Days Allowed on some of your journals or to alter the number of Reminders sent out, please get in touch. It would be great if we could establish a cause-and-effect relationship together. Reach out in the comments below and I would be pleased to help with the data analysis.