Pippa Smart
Editor, Learned Publishing

Plagiarism is bad. Readers are duped into believing they are reading something new, and may also be misled by manipulation of the content. True authors don’t receive credit, and fraudulent authors receive undeserved credit. It is high-profile academic misconduct, and hundreds of blog posts, articles, and news items cover the topic. Retraction Watch specializes in reporting bad behavior, and plagiarism comes up as (probably) the worst offender. It even reaches the general news; for example, a 2017 article in The Guardian asked why plagiarism is so rife and so pervasive in academia.


[Image: modernlibrary.jpg]

The topic has also been well covered on this blog, with articles discussing editors’ experiences and offering advice, guidance, and top tips for avoiding ethical problems when publishing.


Because plagiarism is an ethical problem that – theoretically – should be easy to detect, plagiarism checkers have been in place for many years. CrossCheck (now called Similarity Check), Turnitin, and iThenticate dominate, but journal editors also use other tools, even general search engines, to check their submissions. Some journals check everything, others nothing, and many check only occasional articles depending on suspicion and experience.


However, beyond the shocking scandals and newsworthy retractions, how bad is the problem for most journals?


There are some quantifiable methods for investigating, but they are not always very helpful. Looking at retractions is of some use (see, for example, The Top 10 Retractions of 2017), but this doesn’t reveal how much is caught by editors before publication. Statistics from the checking tools give some indication of how many submissions contain duplication, but they don’t tell us whether the duplication is plagiarism or valid repetition.


So how do we uncover the true experiences of plagiarism? It seems that the only way to gauge the scale of the problem is to speak directly to editors and editorial offices. Therefore, a scoping survey is being run with the support of the European Association of Science Editors (EASE) and Wiley to see if we can discover the true scale of plagiarism.


We are reaching out to editors around the world to see if there are differences between disciplines and regions, and to discover if the levels of plagiarism show intended fraud or more minor copying. Once we have an overview we can determine the best way to investigate further.


We would love to hear your experiences, so we’re asking all editors to complete the short survey (10 minutes maximum) to let us know whether or not you have encountered plagiarized submissions (negative feedback is just as useful). Please do take the survey, and also send the link to your own editorial networks – the more information the better!


Here’s the survey link.


Thanks for helping us learn more about plagiarism!


Image Credit: pexels.com