Fiona Murphy
   Publisher, Life Sciences

We're closing out our data week with an interview with Mark Hahnel, founder of figshare, a Digital Science company which enables researchers to publish their data in a citable, searchable and shareable manner. Also, just an update: the special issue of Learned Publishing on data, covered by Liz Ferguson in "Everybody loves data," is now freely available online.


Source: Digital Science

Q. Why did you decide to start figshare?

A. To begin with, my Digital Science profile describes me as “Genuinely passionate about open science and the potential it has to revolutionize the research community”. I started figshare because I was frustrated by so many aspects of the academic system. It seemed both important and possible to address the opening up of digital content – given the direction funders and policy makers are moving, it will need to be shared at some point anyway. The traditional research ecosystem is already being disrupted by the internet – I want to try and evolve it in a way that doesn’t ruin careers.

Q. What are the implications of open data?

A. Opening up research data has the potential both to save lives (say, with medical advances) and to enhance them with socio-economic progress. It’s a space where humans and computers can work symbiotically, and where industry can also benefit. As an aside, we’ve just updated the figshare homepage to show the images of the Dreadnoughtus dinosaur remains – which can also be downloaded from figshare and printed off on a 3D printer. This is something that would have been unthinkable even a short time ago, when paleontologists were unwilling to share their fossils. Now clever content, and developments such as Jean-Claude Bradley’s Open Notebook Science, illustrate that it is the right time to move in this direction. Publishers and institutions can both benefit from such developments and figshare is continuing to build new partnerships with learned societies, libraries and data providers. Neelie Kroes of the European Commission has stated how important it is for data to be open and discoverable, and of course the OSTP announcement in the US demonstrates similar momentum.

Q. Practically speaking, what does opening data involve from your point of view?

A. Change seems to occur at different rates among scholarly communities. We’re currently noticing a two-pronged spike, if you like, where early career researchers really get what we’re doing, and embrace the ‘open science’ rationale. Similarly, those who have achieved tenure and become very established in their careers are thinking about legacy and subject heritage, and are similarly inclined to share. However, there’s a middle group – post-docs, lecturers, and so forth – who are very much absorbed by the current system of academic accreditation and who don’t feel able to spare the time or take the risk with their data. It’s this group that would most benefit from changes in the academic reward system.

I’ve got a slide which I call “The Six Steps of Open Access” that describes the stages I think we all have to go through (including funders) in order to maximize the benefits of open science. They are:


    1. Recommend Open Access for Primary Publications

    2. Recommend Open Data

    3. Mandate Open Access for Primary Publications

    4. Mandate Open Data

    5. Enforce Open Access for Primary Publications

    6. Enforce Open Data


Currently, I think we’re in the throes of step 4, but I believe that in 5 years’ time we will be safely through all the steps. By then, the technology to enable policing will be well in place.

Q. How do you see open science progressing?

A. I believe that communication is key to making this progress. The more emphatically we can demonstrate to researchers the increased impact that results from open science, the more attractive this paradigm will become. At the moment there are too many papers – a tidal wave of research content – and we need more filters and more finely-tuned incentives to encourage the right behaviors. Impact factor was a good measure during the print age, but it doesn’t enable you to dig into the story behind a piece of research. A paper may have been cited many times because it’s wrong, for instance, whereas altmetrics are increasingly looking to provide nuance around positive vs negative citations and a fuller context; in other words, they capture the ‘digital footprint’ (as coined by Jason Priem in ‘Scholarship: beyond the paper’). Similarly, computational subjects should be able to credit researchers as much for code as for papers, and there should also be a system for differentiating between authors’ contributions to a piece of research (in the same way that movie credits display the roles of various contributors).

At figshare, we’re currently building our institutional offering, in which we are scaling up our partnerships with libraries in particular. The libraries typically provide the critical curation layer, while we build a technology layer on top of that which supports metrics, queries, and other intelligence about the information contained. Libraries are going to be key players in the open science landscape and figshare is looking to empower them to be able to support research communities.

Q. What part can the Research Data Alliance play among all of this movement?

A. It’s generating huge momentum as it operates on a global stage and is really raising awareness about data’s importance. I’m hoping that before too long, anyone who has a niche problem or edge-case query (about, say, personal data, commercial sensitivity, long-tail data, sub-dataset citation issues, etc.) will be able to simply check the RDA website or request a response from RDA and have access to the best-practice response.

Q. What does your crystal ball show five years from now?

A. There is still a lot of work to be done in getting through to the research community, and more carrots need to be in place to encourage them to fully participate. I do feel that in five years or so the landscape will look very different, with the scientific paper being more of a “wrapper”, or story, and digital files the real essence of the material. I think altmetrics will continue to be refined and will become ubiquitous as a research impact measurement. I also feel that peer review will be similarly transformed, to become much more open and portable. I certainly see a place for publishers within this system – particularly regarding the peer review management piece. I find myself wondering whether there will be a proliferation of new journals in that time, or if we’ll be down to one or two mega-journals. It’s certain that journals will change – we could never have had an eLife, PeerJ or F1000R until recently, but I’m just not sure how existing titles are going to adapt in response to the new demands being placed upon them by various stakeholders.

But I am certain that the ‘internet of things’, smart labs, and sensors in everyday life, will be far more integrated into the knowledge canon. I’m not concerned about where content resides – I’m a proponent of the Data FAIRport principles (that data should be “findable, accessible, interoperable, re-usable”). So long as there is a persistent identifier, a sound storage policy, openness and a usable interface, this is real progress as far as I’m concerned. I also don’t believe that humans will be replaced by machines; rather, by building better infrastructures and enabling access and re-use of data, humans will be able to interrogate, clean, and re-purpose data in ways that have not been possible before.

Who's afraid of big data?

Posted Sep 24, 2014
Jennifer Beal
Events & Ambassador Manager, Wiley

We continue our Data Week with coverage of an insightful "Who's afraid of big data?" session from the recent ALPSP conference.


Source: DrAfter123 / Thinkstock

Ah Big Data, how things have changed!  Ten years ago a data talk at a conference would have been nearly empty, but these days it’s standing room only.  And so kicked off an interesting – but sometimes terrifying – session on big data at the ALPSP conference held earlier this month, chaired by Wiley’s Fiona Murphy, Life Sciences Journal Publisher. The first panelist, Eric T. Meyer from the University of Oxford, gave us a brief history of big data, noting the growth in interest mentioned above, and stated that big data was now the “sexiest thing in the room.” He certainly managed to grab the audience’s attention with some examples of how big data is being used, including how a major retailer deduced that a teenage girl was pregnant before her father did based upon her buying habits, how Google’s autocomplete search can be mined to reveal differences in cultural perspectives, and how a general election result can be predicted from Wikipedia page views.  Meyer also highlighted that big data differs by discipline. For instance, while in physics and astronomy big data has been around for years, other disciplines are still experimenting with it. For those of us not quite sure of when data becomes BIG, Meyer tells us “if it’s easily handled on your laptop, it’s not big data!”

The next speaker, David Kavanagh, Managing Director of Scrazzl, described how Scrazzl was created as a “social discovery platform”  to harness the wisdom of crowds. It uses data to validate research-reliant decisions including choices between experimental protocols and the development of research collaborations. The key challenge is that the scholarly record is not full of big data per se, but big unstructured data which needs to be categorized and standardized to be usable. If you direct computational power at structured data, you can get instant results, but unstructured data needs to be processed first, which can generally only be done by human intervention.  With challenges such as turning 380 million unstructured resource matches in PubMed into actionable points, the Scrazzl team have been working to find ways of making this process as automated, and therefore as cost efficient, as possible.

The final speaker, Paul F. Uhlir from the Board on Research & Information at The National Academies, addressed the potential threats of big data, sometimes painting a bleak picture of the future.  For instance, big data is the ‘fuel’ of driverless cars, which will lead to fewer accidents – but what will happen to the car mechanics and insurance companies?  Will big data lead to greater inequality, providing a powerful means to solidify existing gaps in political or financial power? With a small number of companies holding most of the data, Uhlir pondered the implications. What if these companies are able to make stock trades seconds before others?  There is also the challenge of complexity: huge amounts of data give us more information but potentially less knowledge – the more we see, the less we know.  It was not all threats, however.  Uhlir also identified areas of current weakness (and therefore of potential business opportunities) as big data becomes… bigger.  For instance, how should data be reviewed, what are the impacts on traditional metrics, and how do we best share data?  And what of the human side: time and education?

The questions arising in this panel discussion showcased many of the issues which need to be addressed in big data and publishing.  The question of whether researchers should be mandated to give data was answered with ‘it depends’; there are some disciplines where this is more appropriate, but the panel agreed that a default starting point of open data would be the ideal approach.  What can publishers do to support issues around data? Underpinning all suggestions were the following needs: cross-publisher collaboration to establish standard data identifiers and labels which authors would have to use at submission stage, established standards for citing data, and standard categorization of author roles on papers, so that researchers can be credited for all different types of involvement.

Were you at this session or do you have thoughts on data you'd like to share?  Leave us a comment below or tweet @WileyExchanges

Patricia Garcez
Neuroscience Researcher 

Below is the final post in our series of winning essays from UK-based Wiley Advisors on “The challenges and opportunities facing ECRs in building an international reputation in the 21st Century”.  Congratulations to Patricia!


Source: Ingram Publishing / Thinkstock

Young researchers face many challenges in the process of building an international reputation. At the same time, these challenges and opportunities guide them in becoming future leaders.

One of the biggest challenges for the early career researcher is to stand out in a competitive research scenario. With the high number of PhDs and postdocs, and the limited amount of funding and principal investigator positions, it is well known that only a minority of postdocs will become group leaders. In order to progress, it is essential to have an impressive publication record established not long after earning a PhD. Timing is crucial, since most of the career awards and prestigious fellowships are framed within two main parameters: age and impact factor.

In order to progress, planning is vital. Not only does a young researcher have to publish in high impact journals, but he/she also has to do so quickly. Applications to become a group leader can only start once you have succeeded in publishing your research. However, within six years of obtaining your PhD, the number of fellowships to become a group leader decreases substantially. So, the young investigator has to establish a substantial track record of research in a limited amount of time.

Although publishing research in a high impact factor journal is essential, the process still takes a long time. Reviewers can often be overly critical of online supplementary material.  I feel it is the role of Editors to limit the rounds of revisions and to determine when a reviewer is being unreasonable. In practice, young researchers are often reviewers themselves, and they learn to be picky. Sometimes it appears as though it’s unimportant whether the submitted manuscript fits within the aims and scope of the journal. Therefore, the publishing process can be stressful, long and with high risk of rejection.

Despite this, new publishing opportunities can have an impact on a young researcher’s career progression. With open access, publications gain higher visibility and may, consequently, achieve a high number of citations. More importantly, there is now a wider range of publications to choose from. Some journals are using new and innovative processes and transforming the peer-review experience to a more transparent and expedient one. The hope is that these journals can facilitate publication without losing the integrity of the peer-review process and overall quality.

There are many challenges in the 21st century that early career researchers face to establish themselves in science. Diversifying funding and creating more and better publishing opportunities are the best ways to encourage young, talented researchers to pursue scientific careers.

Patricia is a member of the Wiley Advisors program, a group of ECRs and professionals who serve as a voice for their communities. For more information, please visit Wiley Advisors online or on twitter @WileyAdvisors.

Phil Wright
Senior Marketing Manager, Author Marketing, Wiley

A few months ago, Wiley rolled out the Altmetric service across all journals on Wiley Online Library. A common query we receive from our authors is ‘how can we get involved?’ This post outlines five simple things that you (as authors) can do to help improve your article’s Altmetric score. But first, let me address two fundamental questions.

How is the score calculated? The score is a quantitative measure of the attention that a scholarly article has received, derived from three main factors: volume, sources and authors. News and blog mentions are weighted most heavily but, in short, the more people who mention, and interact with, your article across multiple respected sources, the more likely your score is to increase.
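To make the idea of source weighting concrete, here is a minimal sketch of a weighted attention score in Python. The weight values below are invented for illustration only; they are not Altmetric's actual weights, which the company does not publish in this post.

```python
# Hypothetical source weights, reflecting only the general principle that
# news and blog mentions count more than social media mentions.
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "twitter": 1.0,
    "facebook": 0.25,
    "reddit": 0.25,
}

def attention_score(mentions):
    """Sum mention counts per source, scaled by that source's weight."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: 2 news stories, 1 blog post, 30 tweets
score = attention_score({"news": 2, "blog": 1, "twitter": 30})
print(score)  # 2*8 + 1*5 + 30*1 = 51.0
```

The point of the sketch is simply that a mention in a heavily weighted source moves the score far more than many mentions in lightly weighted ones, which is why the promotional tips below emphasize blogs and press offices alongside social media.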

So what’s a good score? According to Altmetric, a mid-tier publication might expect around a third of its published papers to be mentioned at least once, with the rate dropping quickly for smaller, specialized publications – but the exact proportion will vary by journal. On this basis many articles will have a score of zero.

Five things you can do to help improve your article’s Altmetric score

Before promoting your article, we recommend that you create (and subsequently use for various promotional efforts) a short summary of your work. This summary should include your key research outcomes (i.e. we did this study on X and got Y results) as well as links to useful additional resources such as videos. This then allows a wider audience to understand and appreciate your research.

1. If you run a blog, add a post about your article. If you have a contact who runs a blog, ask them to help promote your work.

2. Tweet about your paper – either through any existing accounts that you manage or through any society/institutional accounts.

3. Contact your institutional press office to see if your article is relevant for any publicity opportunities.

4. Talk about your paper at your next conference and personally raise awareness of your research within your own community.

5. Create an account with Mendeley and share your work with thousands of fellow academics.

For more information on promoting your article, take a look at Wiley’s Journal Author Promotional Toolkit.

If you click on the Altmetric score icon for your article on Wiley Online Library (above ‘Additional Information’), you can see your individual article stats. This allows you to track your efforts and compare your paper with other articles in the journal.

Take a look at a real example.

This article, Enhanced repertoire of brain dynamical states during the psychedelic experience, has a score of 256 (as of September). The score is derived from mentions across 19 news outlets, 4 blogs, 95 Twitter accounts, 11 Facebook pages, 4 Google+ pages and 3 Reddit accounts, as well as a YouTube video. So this gives you a sense of all the different elements that contribute to the Altmetric score.

So try out some of these ideas and let us know what works. Leave a comment below or tweet us your thoughts @WileyExchanges. For more information, visit our Altmetric FAQs page.

Phil Wright
Senior Marketing Manager, Author Marketing, Wiley

So you have published your research, your publisher is working hard to market your work, but how can you give your paper that extra push? How can you maximize the impact of your published research?

Listening to our authors, we hear three key consistent requirements:

1. Authors want to publish their work quickly

2. Authors want their work to be discovered, and

3. Authors need more support complying with funder mandates

To address the second point (Authors want their work to be discovered), we developed Wiley’s Journal Author Promotional Toolkit. This focuses on maximizing article discoverability. Wiley works hard to ensure visibility of the work published by our authors – we also want to help our authors to promote their work themselves.

“There is a need for a centralized source of useful information concerning online techniques and strategies to maximize the visibility and impact of published content.” Professor Christine Bond, Editor, International Journal of Pharmacy Practice.

What exactly does the promotional toolkit cover?

The toolkit is structured by promotional tool, making it easy for authors to identify the different activities they can undertake. It covers Search Engine Optimization (SEO), Conferences, Publicity, Social Media, The Wider Web, Multimedia and Email. Within each section, there are tips and guidance on things to try out. Furthermore, the sections include links to the latest relevant posts from the Wiley author blog series – which you are reading now.


How can you use the promotional toolkit?

We purposely chose the name ‘promotional toolkit’ to illustrate the concept that this is a ‘kit’ of valuable information, which is structured by promotional tool. We envision that different parts of the toolkit will appeal to different authors at different times – depending on what your particular needs are. So it’s ready with all the key information you need, when you need it. We recommend that authors use as many of the promotional tools as possible – to maximize the likely impact – but we also appreciate that everyone has their own level of comfort and available time. So focus on what works for you.

How up to date is the promotional toolkit?

To really make this a valuable resource for authors it needs to be updated regularly, and that’s our goal. For example, we recently released the blog post How to promote your research through blogging so we added this to ‘The Wider Web’ section. So whether you need guidance on using social media, tips around search engine optimization, or want to learn more about what really appeals to journalists, add the Journal Author Promotional Toolkit to your list of web favorites.

How can you help shape the toolkit’s future?

We want to make the promotional toolkit as relevant and useful as possible. So let us know what you think. How can we improve this? What would you change? How likely would you be to use the toolkit to promote your own research?

Jonathan Foster
PhD, Materials Science, University of Cambridge 

Today's post is the prizewinning essay from a recent contest open to UK-based Wiley Advisors.  Advisors were invited to submit an essay on "The challenges and opportunities facing ECRs in building an international reputation in the 21st Century".  Congratulations to Jonathan Foster, who has won an all-expenses-paid trip to Frankfurt to participate on an ECR panel at The Annual Frankfurt Conference of the International Association of Scientific, Technical & Medical Publishers (STM)!  And stay tuned here for essays from the runners-up!


Source: kviktor01 / Thinkstock

Isaac Newton was 24 when he developed calculus; Charles Darwin was 22 when he boarded the Beagle; and William Bragg was 25 when he was awarded the Nobel Prize. Many of the greatest scientific breakthroughs have been made by early career researchers (ECRs) and the image of the lone genius remains strong in popular culture. However, the nature of scientific progress is changing and breakthroughs are increasingly made by large, multidisciplinary teams using expensive equipment over many years. The work is still undertaken by ECRs, but often the credit goes to the established academics with the reputations and the resources required to build such teams and to stay in one institution long enough to see the fruits of their labor. Moreover, the framework by which the next generation of academics is selected is still principally based on the publication of first-author papers by ECRs hired on short-term contracts to work on projects directed by others.

If funding bodies want the best researchers to undertake high-risk, high-reward research, they need to provide a funding regime which allows ECRs, as well as their PIs, to take risks. A typical post-doc position lasts two years, which is a relatively short period in which to move to a new institution, learn new skills, build collaborations and make a significant contribution to a field. However, the reality is that in order to secure their next position, researchers often have to apply for funding up to a year in advance, which leaves even less time to get papers published. This is particularly problematic when applying for fellowships which have fixed deadlines and often do not recognize papers in preparation or under review. ECRs must ‘publish or perish’, which favors short-term, well-established projects and can mean settling for lower-quality publications by publishing too soon. The importance placed on first-author publications is also highly damaging to collaborations, as researchers are discouraged from spending significant amounts of time contributing to ‘other people’s’ papers or leaving work for the next person to build on. Funding longer-term positions and reducing grant and fellowship application times would allow ECRs to focus on producing research for longer before they have to worry about applying for their next position. Journals can also play a role by speeding up publication times and by giving prominence to joint first-author status and author contribution sections in publications to reward collaboration.

Building an international research profile in the 21st century has never been easier thanks to cheap flights, instant communication and the internet. However, the well-trodden academic career path of a previous generation – a PhD followed by a post-doc in the US before being appointed as a lecturer – has been replaced with a seemingly endless cycle of moves and temporary contracts. The high number of students graduating with PhDs compared to the small number of permanent academic posts available results in ferocious competition for positions. While healthy competition is essential for ensuring that the best researchers are funded, the process can often feel like natural selection by endurance rather than by ability. The requirement to live on short-term contracts and to be globally mobile into your mid-thirties excludes many otherwise brilliant researchers. The Equality Challenge Unit points to factors such as these to explain the discrepancy between the high numbers of female students graduating with PhDs and the low proportion of tenured female academics. Funding bodies and institutions need to recognize that expecting researchers to uproot themselves every two years in order to establish a global reputation is as outdated as the image of the lone genius sitting under an apple tree.

Jason Karl
Creator, JournalMap

Pick up your smartphone and ask it where the nearest place is to buy a cup of coffee. You will be presented with a map showing all of the nearby coffee shops. Choose one and your phone displays walking directions on a map with an arrow guiding your path. It’s amazing and getting better every day. This use of geographically specific information is now so commonplace that we take it for granted. Yet, not all information is so easily searched geographically – particularly the search for scholarly research.

Harnessing the Geography of Science

Access to scholarly literature has improved dramatically in recent years, and it is easier than ever to search across disciplines and sources to find useful references. However, the ability to find what is known about a specific place is still hindered by search technologies that primarily rely on keyword, topic, text, and author searching – concepts of cataloging and searching for published information that date back to the late 1800s.

The problem is that keyword searches are an inadequate way to find relevant research for specific places. Consider the example of finding research studies from the Chihuahuan Desert of the southwestern U.S. We mapped the locations of more than 800 journal articles that were returned from a Web of Science™ search on “Chihuahuan Desert” (Figure 1a).  Only a third of the search results were actually from our target area. Additionally, our keyword literature search missed any study in the area that didn’t use those two terms (Figure 1b).


Journalmap figure 1
Figure 1. The results of a Web of Knowledge search for articles with the keywords “Chihuahuan Desert” were mapped by where each study was conducted. Only 33% of the articles occurred in the Chihuahuan Desert region (a). Additionally, the keyword search missed many articles in the region that did not use those terms (b).
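The gap between keyword relevance and geographic relevance is easy to illustrate. The toy sketch below, with invented article records and an invented bounding box standing in for a study region, filters by where studies were conducted rather than by what their keywords say:

```python
# Invented example records: each article carries the coordinates of its
# study site, as JournalMap's geotagging would provide.
articles = [
    {"title": "Grassland study A", "lat": 31.8, "lon": -106.4},  # inside region
    {"title": "Survey B",          "lat": 48.2, "lon": 16.4},    # outside region
    {"title": "Soils study C",     "lat": 29.5, "lon": -104.0},  # inside region
]

# A rough bounding box standing in for a study region of interest
# (real regions would be polygons, not rectangles).
REGION = {"lat_min": 25.0, "lat_max": 34.0, "lon_min": -109.0, "lon_max": -102.0}

def in_region(article, region):
    """True if the article's study site falls inside the bounding box."""
    return (region["lat_min"] <= article["lat"] <= region["lat_max"]
            and region["lon_min"] <= article["lon"] <= region["lon_max"])

hits = [a["title"] for a in articles if in_region(a, REGION)]
print(hits)  # ['Grassland study A', 'Soils study C']
```

Unlike a keyword search, this filter neither returns studies that merely mention a region's name nor misses studies conducted there under other terminology.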


We believe that adding map-based tools for searching scientific literature, based on the locations where studies were actually conducted, can be a powerful new approach to finding relevant research – not only for an area of interest, but also for other similar areas. The importance of being able to search by location, or on attributes that can be derived from a study’s location, is most obvious for the natural sciences, but is equally important for many other fields such as epidemiology, microeconomics, and linguistics.

The ability to search for scientific literature geographically and thematically stands to not only improve the ease and accessibility of relevant research findings but also promote syntheses and meta-analyses, provide greater understanding of processes that create environmental patterns, enable evaluations of knowledge biases, and limit redundancy in conducting new studies.

The idea of mapping scholarly literature itself is not new. There are examples of geotagged bibliographies from many fields, including natural resources and human infectious disease. Some publishers are now even providing interactive maps for articles accessed online (though these generally lack geographic searching or the ability to view locations of multiple articles simultaneously). But these efforts have been largely ad hoc, with no way to search for literature by location, share or analyze records outside of their systems, or integrate with other spatial databases that could be used to attribute the articles. This led us to create JournalMap.

A JournalMap for Scholarly Literature Discovery

JournalMap is a map-based scientific literature database and search engine (Figure 2). JournalMap uses study area descriptions from an article (not author affiliations) to map where the research was actually conducted. All articles in JournalMap are geotagged, either automatically using pattern recognition algorithms looking for geographic coordinates or manually from text-based descriptions. Each article record in JournalMap links directly to the publisher via the article’s DOI.
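The automatic side of that geotagging can be approximated with a simple pattern match for decimal latitude/longitude pairs in study-area text. This is a sketch only, not JournalMap's actual algorithm: real geoparsing must handle degrees-minutes-seconds, UTM grids, place names, and much messier prose, and the sample sentence below is invented for illustration.

```python
import re

# Match decimal coordinate pairs such as "32.6175 N, 106.7400 W".
COORD_RE = re.compile(
    r"(\d{1,2}\.\d+)\s*°?\s*([NS])[,;\s]+(\d{1,3}\.\d+)\s*°?\s*([EW])"
)

def extract_coords(text):
    """Return (lat, lon) tuples found in text, signed by hemisphere."""
    coords = []
    for lat, ns, lon, ew in COORD_RE.findall(text):
        signed_lat = float(lat) * (1 if ns == "N" else -1)
        signed_lon = float(lon) * (1 if ew == "E" else -1)
        coords.append((signed_lat, signed_lon))
    return coords

sample = "Plots were located in the study basin (32.6175 N, 106.7400 W)."
print(extract_coords(sample))  # [(32.6175, -106.74)]
```

Articles whose study-area text yields no such match fall back to manual geotagging from the text-based description, as described above.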


Journalmap Figure 2
Figure 2. JournalMap is a map-based scientific literature database and search engine that geotags journal articles by the locations where studies were conducted.

JournalMap makes it easy to search for literature from specific places through a simple map interface. Results of JournalMap searches can be exported in different formats or saved as a collection with a unique URL for a spatial bibliography on a topic, to showcase a set of articles from a journal, or for a georeferenced CV of an author’s papers (Figure 3).


Figure 3. JournalMap collections can be used for creating geotagged bibliographies for a topic or location (left) or to highlight articles by an author (right). Each collection has a unique, stable URL for easy sharing.

For many topics, though, little research has been done in many parts of the world. But research conducted in areas with similar physical, environmental, cultural or political contexts can, in many cases, be relevant to these understudied regions. JournalMap includes search filters for similarity-based searching with layers such as climate, soils, elevation and land cover. We are adding additional filters for attributes like population density, culture, language and infrastructure.

JournalMap’s content comes from several sources. When working with publishers, JournalMap can directly import article XML in industry-standard NLM and JATS formats. Authors and researchers can also contribute articles on their own directly through the JournalMap website – needing only the article DOI (or citation information) and study locations.

Challenges and Opportunities

Our goal with JournalMap is to make geographic-based literature searching commonplace in scientific research. Yet today, the information about where a study was conducted is locked up in the text and figures of scholarly articles, in myriad forms that are not easily searchable.

To make this goal a reality, three challenges need to be addressed. The first is developing ways to geotag previously published articles more efficiently. This is a large and complex task, but it can be tackled through a combination of approaches like intelligent text mining and crowdsourcing. The second challenge is establishing standards for reporting locations in scholarly literature, capturing that location information at article submission, and encoding it in a machine-readable format in article metadata. The final challenge is developing robust, interconnected geographic search and analysis tools for scientific literature.

JournalMap is working on all three of these fronts, but our efforts alone will not be sufficient to get the job done. We would like to invite the publishing industry as well as journal editorial boards to work with us to develop and implement standards for better location reporting and encoding in articles. Additionally, we encourage authors and researchers to visit JournalMap.org and contribute articles to help grow the database and prove the concept. We believe the time is right for the idea of geographic-based science discovery to take hold and transform scientific knowledge discovery and application.

Verity Warne
Senior Marketing Manager, Author Marketing, Wiley 
Source: Federico Caputo / Thinkstock

Over the past few months, we’ve frequently found ourselves sitting on the other side of the desk, learning directly from researchers what they need most from publishers, at events such as this ALPSP Seminar, Sense about Science workshops and SSP, and through our own UX testing. Below are some of the strongest messages coming across:

1. If I have to remember one more password… Researchers are busy people. Time and resources are tight and getting tighter, and the potential for information saturation grows with every new technological breakthrough. It’s not enough to deliver the technology; publishers need to ensure the technology takes the form of tools that researchers can and will use to increase the impact of their research.

2. Article vs. journal? As a research tool, altmetrics are embraced as a means of assessing the quality of articles and research material independently of the source journal. But when it comes to publishing the outcomes of their work, researchers still use journal impact as a primary deciding factor. Developing alternative ways to demonstrate research impact, beyond the Impact Factor, is important in today’s cutthroat grant and tenure processes; however, for altmetrics to give researchers a tangible edge, they’ll need to advance beyond their current squishy correlational state.

3. Social review. Social media is more than just another route to content: as well as a tool for sharing and collaborating on research, it is increasingly used as a platform for post-publication peer review. Researchers want social media platforms incorporated at the article level.

4. Getting acquainted.  Researchers are social creatures – it’s not just the article they engage with, but the author/researcher who produced it. They may read the article on a publisher platform, but will then contact the author directly (via PubMed/ResearchGate/social media/email). Can publishers facilitate this?

5. A better user experience. Researchers want publishers to continue developing ways to deliver content in a more user-driven way, to ensure the best possible experience for research, reading and publishing. This means easy, fast and efficient access to relevant content. They want to find the right stuff quickly and then access it immediately, ideally from one place.

6. Open the black box.  Researchers want more communication more frequently during the publishing process.

The ongoing challenge, and opportunity, for publishers is to make life easier for researchers. But, looping back to #1, when developing new products or workflows it’s worth remembering these questions, as framed by figshare’s Mark Hahnel at an ALPSP presentation earlier this year: "The key questions researchers ask are: What’s in it for me? Is it hard to use? Will it take up my time? Do I have to do it?"
