This was the question posed by Catriona Fennell as we sat in the banks of the Willow lecture theatre at Oxford Brookes University. 2018 has been a tumultuous year so far – fake news, the Cambridge Analytica scandal, the list goes on – and now more than ever we need an answer to this question, not just to protect our authors, partners and integrity, but to preserve our faith in all kinds of published content. In this vein, I joined around 40 other scholarly communications enthusiasts, ready to thrash out the question with three key advocates of the issue: Pippa Smart, Editor-in-Chief of Learned Publishing, Catriona Fennell, Elsevier’s Director of Publishing Services, and Chris Graf, Director of Research Integrity and Publishing Ethics at Wiley.
Science and the Quest for Truth
Like Catriona, I’d consider myself a science nerd. I believe in its power to search for truth. And yet, based on Catriona’s experience, a typical director of publishing services will find ~200 retraction emails per year in their inbox, suggesting that scholarly research is not in fact as rigorous and trustworthy as we thought. Worse, the consequences can be cumulative: too much fraudulent research makes research itself seem a fraudulent endeavor and makes science untrustworthy. Why? How do we explain this conundrum, this tension of opposites?
Human Fallibility and Accountability
For Catriona, the answer is in fact quite simple. We are humans. We make mistakes. In reality, retractions make up only a very small proportion (around 0.1%) of the world’s research output annually. The trouble is that humans are inherently creative, and wherever there is a pen (or a keyboard), our narrative voice slips through. Of course, researchers’ primary concern is with method, rigor and evidence, but to get attention there is also the need for a hook – a story – particularly when publication contributes to career development as well as to a quest for understanding. Researchers shouldn’t feel bad about this; it’s the state of play in all fields of work, and it is natural to be concerned with one’s career. It’s our job as publishers to keep the narrative in realistic proportion.
The trick, Chris elucidated, is for researchers to feel comfortable in being honest when things go wrong, and correct or retract. This shouldn’t be seen as a bad thing, but as a genuine and necessary contribution to the scientific record. This might make a researcher laugh in equal parts amusement and fear, but in fact we have some real success stories of groups retracting work with careers and reputations left intact. As Chris put it, “being a brilliant scientist isn’t just about being top of the class, it’s about doing it properly.” Our responsibility here, as part of the scholarly communications community, is to help shift the perception and empower researchers to embrace retractions. When a researcher retracts an article, this could connote honesty and reliability rather than fraud and obscurity, and a clear explanation of the underlying reasons in retraction notices is the key to highlighting the difference. As Chris advocates, our focus in resolving the quality control crisis should be about integrity first, and impact last. We can’t control the behavior of “free-riders”, but as a collective community we can and do put systems and best practices in place to minimize their effects on the scientific record. As the Committee on Publication Ethics (COPE) describe it, this is a “wicked problem”, one with many moving parts and conflicting agendas. In this light, the idea of a collective community and common ground is even more important in finding a way to ensure our content is trusted.
This brings us to integrity. Catriona made the point that integrity is one of three terms often used to define “expertise”. The expectation is that peer review (both the process and the systems that facilitate it) will safeguard the integrity of a research article, but again the fact remains that retractions occur and predatory journals exist. “You didn’t have to prove it so intensely back in the day,” argued Catriona; this suggests to me that technology, combined with population growth, has complicated our trust in scientific articles. How can I be sure this has been peer reviewed by genuine experts, that this is a reliable journal, that these images haven’t been improperly manipulated? The tech revolution may have provided a new platform for the manipulation of research data, but it has also ushered in an age of openness, transparency and a potential for systems that can help re-establish integrity as the primary trait of the scientific record (if it was ever truly lost). There are initiatives already addressing this, such as the Peer Review Blockchain Initiative (read more in a recent interview published on The Scholarly Kitchen). Dr Wen Hwa Lee and his team at the Structural Genomics Consortium are championing extreme open science. As a result, the impact on the pace of research into new drugs and on the success of their clinical trials has been astounding, making a powerful argument for embracing open collaboration (learn more about their work by watching Dr Lee’s recent talk at Wiley here).
At the same time, all of us sitting in the Willow bank were likely employed by Western, well-established publishers, with a common understanding of our expectations. But as Pippa highlighted, this is not the case everywhere. It is both a problem and a great opportunity that science is a global endeavor, and Pippa wants to see us embrace the latter. International collaboration in any field or industry can cause tension and difficulties, but it can also create wonderful synergies and innovations. Again, the notion that this is a wicked problem, one convoluted by the labyrinth of people and steps in the process of publishing a research article, was at the heart of Pippa’s argument. Yes, readers need to trust authors – but they need to be able to trust those in the middle too. There are many reasons not to trust those in the middle – commercial publishers need to make profit to pay staff and shareholders, universities have a vested interest in showcasing their institution, reviewers may be influenced by affiliation with a high-profile institute, and editors may be inclined to accept articles primarily for the promise of citations.
Looking at Publishing Ethics Through a Circumstantial Lens
All of this might seem somewhat cynical and in contrast to my earlier optimism (or naivete), but we’re all subject to the influence of decision-makers at times. We all know that there are things to be cynical about in research: long-running jokes about professors adding their name to the author list of every paper coming out of their department, spin editors employed to reduce hyperbole, acceptance-accreditation-remuneration ploys, and hard-earned impact factors and reputations of esteemed journals being exploited by predatory journals. But with a global perspective, some behaviors considered plagiaristic in Western publishing have altogether different causes. The proverbial “copy and paste” will typically result in instant rejection by Western journals, but if English is your second or even seventh language there are other reasons, beyond “stealing”, to use this computer and language shortcut. In some countries, Pippa reminds us, it is considered impolite and disrespectful to cite someone; it suggests that the cited work is not well-known in its own right. We’ve already touched on the influence of career development on researcher behavior, and it’s clear that some cheat because they are under such pressures. But for researchers in some parts of the world, the pressure comes not only from the desire to advance, but from the need simply to keep their current job.
So there are some relatively easy and intuitive things we can be aware of as we figure out this wicked problem, such as the nature of one’s research field. A small field might have a number of researchers publishing excellent research, but not enough available reviewers with the expertise to recognize the nuances and significance of the work. In this case we can use technology and collaboration to build better reviewer pools and tools. For the early-career engineering researcher who is concerned about peer review reports, we can find ways to make these publicly available. Then there are some more pressing things to keep top-of-mind in any discussion of integrity in scholarly publishing – that expectations of best practice can depend on the discipline and the individual’s post, but also significantly on location and personal background. We need to raise awareness of these circumstances.
Finally, in the bigger picture, there are sometimes sociopolitical contexts that need to be considered. A question arose about censorship post-publication. Censorship itself conveys a lack of trust between power and people, and inevitably it has found its way into the wicked problem we face in scholarly communication. Consider Cambridge University Press, which received criticism over the censorship of The China Quarterly. The journal was censored in China at the request of the Beijing government, a decision reversed after international protest; both actions drew criticism of the publisher. An audience member asked if this was an isolated incident or if other publishers have experienced this kind of pressure. Springer certainly have, and Catriona agreed that it’s a case of “when”, not “if”. As publishers, she suggests, the best thing we can do is work with the Intellectual Property Office (IPO), being clear and concise about our positions now (whatever they are) so that any future action needed around censorship will be uncontroversial. Other examples include the requested censorship of the names of researchers who had become political prisoners in Turkey and with whom institutions no longer wished to be associated. Another case involved a Wiley toxicology journal publishing content about terrorism – in the aftermath of 9/11, the decision-makers had to try to view the consequences of publication without bias. There are even examples of author pseudonyms requested to protect incriminating interests. The onus is often on the publisher to make an ethical judgement on whether to publish the article.
Clearly there is a lot to say, and even more to do, to improve trust and integrity in scholarly communications. Our panelists did a great job of spotlighting some of the most important factors in this wicked problem, but there is no simple roadmap to resolution. Solving it might feel like a large and onerous task for any one person but as Chris said, “although there is only one of me or you, there are thousands of us”.