Richard Gedye
Research4Life Publisher Coordinator

In 2002, Wiley became one of the founding partners of HINARI, a public-private program launched by six leading medical publishers in collaboration with the World Health Organization. Its aim was to bring online access to peer-reviewed biomedical research journals to researchers and physicians in the world’s poorest countries.

Twelve years later, HINARI is just one of four programs in what is now known as the Research4Life initiative, through which users at more than 7,700 institutions in 116 developing countries have free or extremely low-cost access to up to 14,500 journals and 30,000 books from some 185 publisher partners.

Users at registered institutions in countries on the UN’s list of Least Developed Countries have completely free access to the Research4Life content, but these countries also face some of the greatest barriers to using it. Intermittent electricity supply, a shortage of computers, and excruciatingly slow and expensive internet connections are just some of the constraints that researchers in these countries have to contend with.

So, as part of a continuing program to ensure that as many information channels as possible are freed up for users in environments of unusually severe access constraints, the International Association of STM Publishers (STM) has invited those of its members who are Research4Life participants to sign a Statement on Document Delivery to Qualifying Institutions under the Research4Life Program in United Nations-Designated Least Developed Countries. The current signatory publishers (of which Wiley is one) are, in addition to their direct support for Research4Life, authorizing all their current subscribing library customers in the developed world to provide copies of articles, free of copyright fees, to those educational, hospital, and academic institutions in Least Developed Countries that are registered members of the Research4Life programs. So far, 17 STM publishers, representing about 7,800 journals, have indicated support for this new statement.

This statement will be especially relevant for national libraries, like the British Library, for which document delivery is a major area of activity. Such libraries will now be able to supply a significant number of journal articles to researchers in the world’s poorest countries without being obliged to levy the publisher copyright fees that would routinely be charged for article supply to customers in the world’s wealthier markets.

For examples of how access to research articles via Research4Life has transformed the lives of researchers, physicians, and patients in developing countries, take a look at the case studies outlined in the booklet Making a Difference. For more information about the Research4Life programs, visit the website at www.research4life.org or contact me directly at gedye@stm-assoc.org.

Anne-Marie Green
Communication Manager, Wiley
Source: Peter Brantley

We recently caught up with Peter Brantley to learn more about Hypothes.is. Peter is the Director of Scholarly Communication at this not-for-profit, open source organization seeking to provide annotation services for the web. Previously, he was the Director of the BookServer Project at the Internet Archive, where he developed new business models for distributing digital books and fostered the publication and distribution of, and access to, digital content based on open formats and standards.


Q. What is Hypothes.is and how will it serve the scholarly research community as well as the public at large?
A. Hypothes.is is a small, not-for-profit organization that seeks to develop an open source software stack enabling the annotation of web documents. These documents are not limited to text; they can include images, audio, or video. The intent behind Hypothes.is is to broaden the conversation about published materials on the web. I think one of the primary audiences for this kind of engagement is the scholarly community.


Q. What has the response been to the launch of Hypothes.is from the academic community?
A. There has been a great deal of interest in annotation technology generally. Hypothes.is is one of many organizations in the academic community pursuing annotation; in some ways we take on the role of coordinator for a lot of these activities, particularly those based on Annotator, software originally developed by the Open Knowledge Foundation. What has been most interesting to us, just over this last year, has been a fairly significant burst of interest in the utilization of annotation in a research context. For instance, among the scholarly communication grants The Andrew W. Mellon Foundation is considering for funding this summer, approximately half involve annotation in one guise or another. This is quite striking.


Watch the video below to learn more about Hypothes.is.

Q. When do you plan to roll out Hypothes.is and will scholarly research be a starting place?
A. We anticipate that scholarly activity is definitely going to be one of the leading-edge adopters of annotation technology. There is quite a bit of interest in incorporating annotation into various kinds of workflows associated with scholarly objects. There are some very straightforward use cases, where scholars can integrate annotation into post-publication discussion: articles or research objects that have been published can invite commentary on their attributes, either from specific communities or from a concerned general public, soliciting additional information, clarification, and so forth. We are also seeing quite a bit of interest in prepublication use of annotation, primarily to augment or refactor traditional peer review practice. In this case, we are very sensitive to established customs in how peer review is conducted and to the fact that peer review is usually highly integrated into the heart of the publishing workflow. I think it will take a little more time to pursue experiments with publishers and scholarly societies to determine how broader engagement through online annotation might facilitate this kind of critique and commentary. We anticipate being able to roll out a browser extension for annotation that is acceptable for broad consumer use in mid-to-late summer. But specific communities that are receptive to annotation, such as scholarship, education, journalism, and open government, are probably going to adopt the software prior to the widespread adoption of a consumer version.


Q. Why is the non-profit nature of Hypothes.is important?
A. We really believe that not-for-profit status is necessary in order to produce software that can be utilized easily and openly by others. When we examined the possibility of adopting a for-profit model, which might have made it easier to attract funding in Silicon Valley, we realized it would have frustrated potential adopters and limited our ability to engage with a wide range of communities. Being able to provide open source software solutions that have input from a wide range of professional areas has been critical for the growth and maturation of annotation technology.


Q. How will annotation be adopted to serve the needs of different communities?
A. There are three different paths to adopting annotation. One is fully public annotation, where any web browser user can annotate any public content. Certain types of news or social sites might endorse this style of annotation. The second, on the opposite end of the spectrum, allows annotation within fully private communities. For example, a biopharmaceutical company might support internal annotation of clinical trial research and data, with its researchers having access to annotation of the external scholarly literature, yet not want to publicly expose proprietary internal annotations. The third path, where most communities probably sit, falls right in the middle. An organization, such as a scholarly society or a large publishing platform, might incorporate annotation into its post-publication or prepublication peer review processes, but do it in a community-mandated fashion, so that if you are a member of that society you can participate in an annotation layer that may only be accessible to other community members. This way, an organization can elevate the value of annotation for its membership. Similarly, an online newspaper might support an annotation feature that is only accessible to subscribers. That would not stop public annotation, but it would provide an incentive for subscription in order to gain access to these value-added layers, which are inherently more embedded within the site.


Q. Annotator pseudonymity is also a principle of Hypothes.is. What are the potential advantages and pitfalls of this?
A. We think that supporting a minimal degree of pseudonymity is important because an association between annotation commentary and a persistent account ID is the only viable, straightforward way to establish reputation metrics. There are other ways of establishing and maintaining reputation over transient IDs, but that becomes more difficult and more fragile. The appeal of pseudonymity over real names is the obvious fact that commenting on news or pending legislation might carry professional or personal risk to an individual’s life or property. We want to acknowledge that people’s voices have consequences and that there is value and weight to speaking. We are trying to figure out the best ways of safeguarding an individual’s ability to speak while ensuring that there is some evaluation possible and responsibility inherent in the conversation.


Q. Do you have plans to partner with publishers or institutions? What will the nature of those partnerships be?
A. We are already partnering with several groups, and we’re in conversations with others. We were just awarded a grant from the Alfred P. Sloan Foundation to start experimenting with annotation in various peer review contexts with our grant partners: the American Geophysical Union, eLife, and the arXiv repository at Cornell. This grant will also likely lead to exploration with secondary partners. We are also hoping to announce another grant this summer that would involve a broader set of societies and be more humanities-oriented. We are definitely pursuing these kinds of engagements and have found there is great interest within the scholarly community in beginning to experiment with how annotation can either resolve difficulties in feedback or enable new forms of commentary and engagement.


Thanks for talking with us, Peter.

Open Science? Try Good Science.

Posted Apr 2, 2014
Maryann Martone
Professor of Neuroscience

If the Neuroscience Information Framework is any guide, we are certainly in an era of “Openness” in biomedical science. A search of the NIF Registry of tools, databases, and projects for biomedical science for “open” returns over 700 results, ranging from open access journals to open data to open tools. What do we mean by “open”? Well, not closed or, at least, not entirely closed. These open tools are, in fact, covered by a myriad of licenses and other restrictions on their use. But the general theme is that they are open for at least non-commercial use without fees or undue licensing restrictions.



So, is Open Science already here? Not exactly. Open Science is more than a subset of projects that make data available or share software tools, often because they received specific funding to do so. According to Wikipedia, “Open science is the umbrella term of the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.” Despite the wealth of open platforms, most of the products of science, including, most notably, the data upon which scientific insights rest, remain behind closed doors. While attitudes and regulations are clearly changing, as the latest attempts by PLoS to establish routine sharing of data illustrate (just Google #PLOSfail), we are not there yet.


Why are so many pushing for routine sharing of data and a more open platform for conducting science? I became interested in data sharing in the late 1990s, as a microscopist, when we started to scale up the rate and breadth at which we could acquire microscopic images. Suddenly, thanks to precision stages and wide-field cameras, we were able to image tissue sections at high resolution over much greater expanses of tissue than before, when we were generally restricted to isolated snapshots or low-magnification surveys. I knew that there was far more information within these micrographs and reconstructions than could be analyzed by a single scientist, and it seemed a shame that they were not made more widely available. To help provide a platform, we established the Cell Centered Database (CCDB), which has recently merged with the Cell Image Library. Although the CCDB was successful in hosting data from outside researchers, we were rarely contacted by researchers wanting to deposit their data; most of the time we had to ask, although many would release their data if we did. But I distinctly remember one researcher saying to me: “I understand how sharing my data helps you, but not me.”


True. So, in the interest of full disclosure, let me state a few things. I try to practice Open Science, but I am not fanatical. I try to publish in open access journals, although I am not immune to the allure of prestigious closed journals. I do blog, make my slides available through SlideShare, and upload preprints to ResearchGate. But I remain sensitive to the fact that, through my informatics work in the Neuroscience Information Framework and my advocacy for transforming scholarly communications through FORCE11 (the Future of Research Communications and e-Scholarship), I am now in a field where: A) I no longer really generate data; I generate ontologies and other information artefacts, and these I share, but not images, traces, sequences, blots, or structures; and B) I do benefit when others share their data, as I build my research these days on publicly shared data.


But do I support Open Science because I am a direct beneficiary of open data and tools? No. I support Open Science because I believe that Open Science = Good Science. To paraphrase Abraham Lincoln: “If I could cure Alzheimer’s disease by making all data open, I would do so; if I could cure Alzheimer’s disease by making all data closed, I would do so.” In other words, if the best way to do science is the current mode: publish findings in high-impact journals that only become open access after a year, make sure no one can access or re-use your data, make sure your data and articles are not at all machine-processable, publish under-powered studies with only positive results, and allow errors introduced by incorrect data or analyses to stay within the literature for years, then I’m all for it.


But we haven’t cured Alzheimer’s disease, or much else in the neurosciences, lately. That’s not to say that our current science, based on intense competition and opaque data and methods, has not produced spectacular successes. It surely has. But the current system has also led to some significant failures, as the retreat of pharmaceutical companies from neuroscience testifies. Can modernizing and opening up the process of science to humans and machines alike accelerate the pace of discovery? I think we owe the taxpayers, who fund our work in the hope of advancing society and improving human health, an honest answer here. Are we doing science as well as it can be done?


I don’t believe so. And, as this is a blog and not a research article, I am allowed to state that categorically. I believe that at a minimum, Open Science pushes science towards increased transparency, which, in my view, helps scientists produce better data and helps weed out errors more quickly. I also believe that our current modes of scientific communication are too restrictive, and create too high a barrier for us to make available all of the products of our work, and not just the positive results. At a maximum, I believe that routine sharing of data will help drive biomedical sciences towards increased discovery, not just because we will learn to make data less messy, but because we will learn to make better use of the messy data we have.


Many others have written on why scientists are hesitant to share, or outright refuse to share, their data and process (see #PLOSfail above), so I don’t need to go into detail here. But at least one class of frequent objections has to do with the potential harm that sharing will do to the researcher who makes data available. A common objection is that others will take advantage of data that you worked hard to obtain before you can reap the full benefits. Others say that there is no benefit to sharing negative results, detailed lab protocols, or data, or to blogging, because it is more productive for them to publish new papers than to spend time making these other products available. Still others are afraid that if they make available data that might contain errors, their competitors will attack them and their reputations will be tarnished. Some have noted that, unlike in the open source software community, where identifying and fixing a bug is considered a compliment, in other areas of scholarship it is considered an attack.


All of these are certainly understandable objections. Our current reward system does not provide much incentive for Open Science, and changing our current culture, as I’ve heard frequently, is hard. Yes, it is. But if our current reward system is supporting sub-optimal science, then don’t we as scientists have an obligation to change it? Taxpayers don’t fund us because they care about our career paths. No external forces that I know of support, or even encourage, our current system of promotion and reward: it is driven entirely by research scientists. Scientists run the journals, the peer-review system, the promotion committees, the academic administration, the funding administration, the scientific societies, and the training of more scientists. Given that non-scientists are beginning to notice, as evidenced by articles about the lack of reproducibility in The Economist (2013) and other non-science venues, perhaps it’s time to start protecting our brand.


While many discussions on Open Science have focused on the potential harm to scientists who share their data and negative results, I haven’t yet seen discussions of the potential harm that Opaque Science does to scientists. Have we considered the harm that is done to graduate students and young scientists when they spend precious months or years trying to reproduce a result that was perhaps based on faulty data or selective reporting of results? I once heard a heart-breaking story of a promising graduate student who couldn’t reproduce the results of a study published in a high-impact journal. His advisor thought the fault was his, and he was almost ready to quit the program. When he was finally encouraged to contact the authors, he found that they couldn’t necessarily reproduce the results either. I don’t know whether the student eventually got his degree, but you can imagine the impact such an experience has on young scientists. Beyond my anecdotal example, we have documented cases where errors in the literature have had significant effects on grants awarded or on the ability to publish papers that are in disagreement (e.g., Miller, 2006). All of these have a very real human cost to science and scientists.


On a positive note, for the first time in my career since I sipped the Kool-Aid back in the early days of the internet, I am seeing real movement towards change, not just by a few fringe elements but by journals, senior scientists, funders, and administrators. It is impossible to take a step without tripping over a reference to Big Data or metadata. Initiatives are underway to create a system of reward around data in the form of data publications and data citations. NIH has just hired Phil Bourne, a leader in the Open Science movement, as Associate Director for Data Science. And, of course, time is on our side, as younger scientists and those entering science perhaps have different attitudes towards sharing than their older colleagues. Time will also tell whether Open Science = Good Science. If it doesn’t, I promise to be the first to start hoarding my data again and publishing only positive results.


References:

The Economist (2013). How science goes wrong. October 19, 2013.

Miller, G. (2006). A scientist’s nightmare: Software problem leads to five retractions. Science, 314, 1856–1857.
