{"objectType":14,"id":2014,"valid":true}
2016

Why You Need an ORCID iD Now

Posted Dec 1, 2016
Kelly Neubeiser
Author Marketing, Wiley


Wiley recently signed ORCID’s Open Letter, becoming the first major publisher to require ORCID iDs as a condition for submission.

 

By now, most of you are probably familiar with ORCID, a nonprofit providing unique and persistent digital identifiers to all who participate in research, scholarship, and innovation. Why is using an ORCID iD a good idea? For one, it easily connects you to your achievements and contributions, meaning no one else will get credit for your work. Second, it saves time: you can elect to have your ORCID record updated automatically each time you publish an article.
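
For the technically minded, the identifier itself is designed to be checkable: an ORCID iD is a 16-character identifier (displayed in four groups of four) whose final character is a checksum computed with the ISO 7064 MOD 11-2 algorithm. The minimal Python sketch below verifies that check character; the function name is ours, and the iD shown is ORCID's own documentation example.

```python
def orcid_checksum_ok(orcid_id: str) -> bool:
    """Validate the final check character of an ORCID iD (ISO 7064 MOD 11-2)."""
    digits = orcid_id.replace("-", "")
    total = 0
    for ch in digits[:-1]:  # every character except the final check character
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return digits[-1] == ("X" if result == 10 else str(result))

# ORCID's documentation example iD; prints True.
print(orcid_checksum_ok("0000-0002-1825-0097"))
```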

 

To help explain the widespread benefits of ORCID, we asked the experts: Alice Meadows, Director of Community Engagement & Support for ORCID; Edward Wates, Vice President of Content Management at Wiley; and Roger Watson, Professor of Nursing in the Faculty of Health and Social Care at the University of Hull and Editor-in-Chief of the Journal of Advanced Nursing.

 

Q: Can you tell us a bit about your experience with ORCID?

 

AM: I joined ORCID in May 2015 as Director of Communications and moved into my current role as Director of Community Engagement & Support in January 2016. It’s been an exciting time for the organization, with ORCID adoption and usage growing rapidly, thanks in part to initiatives like the ORCID open letter that Wiley has just signed.

 

EW: Since registering for an ORCID iD, I have published in several Wiley journals. I was delighted when the auto-update functionality was launched in October 2015 and my ORCID record was automatically updated with my publication details. This has saved me the effort of manually updating my record and ensures my publication record is kept up to date. I also serve as a Board member and Treasurer for ORCID, and have been highly impressed by the quality of the team and the progress we are making towards ensuring that ORCID is the default standard for author disambiguation.

 

RW: My experience of working with ORCID comes mainly as an author – both as someone with an ORCID record and as someone submitting to journals. As an editor-in-chief I support and encourage the use of ORCID for all manuscripts submitted to my journal.

 

Q: What do you think are the three most important benefits ORCID provides to researchers?

 

AM: First, being able to uniquely and authoritatively connect themselves with their research contributions and affiliations irrespective of how common their name is or how many variations of it they use professionally. This makes their work more discoverable and helps ensure they get proper recognition for it. Second, having control over their ORCID record – they create their own iD, decide what to connect to it, who they allow to read and/or update their record, and which information is made publicly available, shared with trusted parties, or kept completely private. And third, saving time and reducing errors by connecting information to their record once and then having it flow into the other systems and workflows they use. Auto-update (offered by Crossref and DataCite) is especially important here as it enables researchers to opt for their ORCID record to be automatically updated every time they use their ORCID iD during the publication process.

 

EW: Without a doubt, the ability to automatically update an author’s publication record is the main benefit at present, but future developments will also make it easier to submit articles for publication and enable authors to meet their funders’ requirements more easily.

 

RW: First, it provides an unambiguous identity in the online environment to which career, publications and research grants can be linked. Second, it’s a webpage that can be made publicly accessible to promote your work, and also a means whereby the authenticity of your work may be verified. Third, it’s a ready-made way of uploading your details to a journal webpage in the process of submission and then linking new publications to your ORCID page.

 

Q: What issues does the use of ORCID iDs within your organization solve for researchers and authors?

 

AM: Our focus is on enabling other organizations to use ORCID to help their researchers and authors.

 

EW: At present there are too many manual requirements for the submission process as well as multiple log-ins to our systems. An ORCID iD provides robust identification for individual researchers and authors, which in turn will allow us to develop the tools to simplify the submission and publication process.

 

RW: If ORCID is used universally, it is one less piece of information that has to be presented at appraisals and promotions. For example, you can simply provide your ORCID iD to your line manager or the promotions committee.

 

Q: What is the most common concern you’ve heard when it comes to ORCID and how do you address it?

 

AM: Probably the most common concern we hear from researchers is that they don’t want to have to create and maintain “yet another identifier/profile system”. But in fact their ORCID iD works across multiple systems and can be connected with different identifiers. So researchers can connect their ORCID iD to the profiles they’ve already created in systems like Kudos, Mendeley, Scopus, Web of Science, and in research information management systems such as Converis, Pure, Symplectic Elements, and Vivo. So instead of being a burden, ORCID actually saves researchers time by enabling that information to flow between these different systems with minimal effort on their part.

 

EW: Some people express concerns about a surveillance culture and the potential misuse of confidential information. This is emphatically not the case for ORCID as authors and researchers define the privacy settings of their own ORCID record data. Trust is one of the key principles of the organization, and ORCID has put in place a number of controls, policies and practices to demonstrate its commitments in this area.

 

RW: Having to administer and curate another webpage is the most common concern, but emphasizing the overwhelming, common-sense advantages of ORCID usually convinces people. Also, once your iD is set up and you begin using it when submitting manuscripts, you are prompted to link any articles that are published to your ORCID record.

 

Q. If you could tell our readers one thing about ORCID, what would it be?

 

AM: Registering for an ORCID iD is very quick and easy, but that’s just the first step. So once you have an iD, please be sure to use it whenever you’re prompted to do so – by your association, funder, publisher, research institution, or any other organization.  The more you use it, the more valuable it will be to you, your organizations, and the wider research community.

 

EW: The widespread adoption by the community is essential if we are to alleviate the burden for authors of re-entering publication data into multiple systems. Many of the benefits for authors will be delivered by third-party integrations based around an ORCID API. These will be developed increasingly as ORCID gains more widespread appeal and traction.

 

RW: It’s free, it’s linked to ResearcherID, and it gives you a cool QR code to put on your CV (OK, that’s three things, but ORCID is really good!)

 

For more information on ORCID, visit our website.

 

Hannah Wakley
Senior Production/Managing Editor
Iris Poesse
Associate Managing Editor
Noel McGlinchey
Senior Editorial Assistant

Introduction

 

At the end of October, sixty-five people with an interest in peer review gathered in Brussels for the European meeting of the International Society of Managing and Technical Editors (ISMTE). The theme of the two-day event was ‘Becoming more open’ but there were presentations and workshops covering many aspects of peer review management and new trends in scholarly publishing. With so many publishing professionals in the same place, there were also a lot of interesting conversations over tea and coffee between sessions.

 

There were presentations on topics that have been subject to great discussion and debate in recent years, such as copyright reform and open access. However, three areas seemed to have shown the greatest recent progress, and these are outlined below.

 

Open peer review

 

Opening up processes was a key topic of discussion in multiple sessions, whether it was a plea for recognizing that peer review is not the same for all journals, opening up editorial office reporting, or analysing the benefits of open peer review.

 

In an open peer review panel, Phil Hurst, Publisher at the Royal Society, explained that open peer review translates not only into transparency, but into better reviews and reviewer recognition. Adrian Aldcroft, BMJ Open Editor, added that bias is currently a main disruptor in the review process, and while you would intuitively say ‘go double blind’, that is not the answer. Double blind review can contain hidden bias, and at least with open peer review any bias is upfront. In his experience, open peer review leads to higher acceptance rates due to more constructive reviews, so we can publish more. Dominika Bijos, Research Associate, sees open peer review as part of educating the next generation about how to judge science. It increases the awareness of what peer review means, how it works and what it is. For young researchers, providing good reviews is a way to become known to editors. But they need guidance too; opening peer review shows the effect reviews have and the value of a reviewer’s contribution.

 

Open peer review trials conducted by BioMed Central showed that the reviews returned were more substantiated with evidence and that the number of submissions grew. However, the number of reviewers who agreed to review went down. Further disadvantages are that reviews take longer and that young researchers might be uncomfortable providing (critical) reviews of senior researchers. There is also scepticism from editors to be overcome, but open peer review doesn’t question their integrity, nor does it break any agreement of confidentiality.

 

Opening peer review is a good way forward, but although there seem to be no major adverse effects, it is not a simple switch. However, there is no need to jump to open peer review in one go; there are levels of open peer review, with baby steps or bigger strides to take. It is worth considering whether it is more important to know who reviewed a paper, or whether it is more important that the content is available.

 

Editorial Office reporting

 

As part of ISMTE’s mission to provide education and training for those working in editorial offices, the Society’s Education Committee has been working with a statistician to develop best practice guidelines for reporting editorial office data. They are now available to members on the ISMTE website and attendees of the European meeting were given a sneak preview. Jason Roberts, from Origin Editorial and the founding President of ISMTE, explained that there is currently no standardization in peer review management reporting and little consensus on what parameters to use. The new resource details five essential reports for editorial board meetings and provides templates to help editorial staff report, analyse and present data in more meaningful and accurate ways. He talked through the most appropriate graphs to use for presenting numbers of submissions, the importance of giving three years of data to allow for comparisons, and the advantages of reporting turnaround times with a median and a coefficient of variation rather than a simple mean. Those who work in editorial offices are not always as mathematically literate as the editorial boards they report to, so this resource will be fantastically useful. We all left the workshop with the intention of revising our reporting templates!
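
To make that statistical recommendation concrete, here is a small Python sketch (our illustration, not part of the ISMTE resource) that summarizes a set of hypothetical turnaround times with the median and the coefficient of variation alongside the simple mean:

```python
from statistics import mean, median, stdev

# Hypothetical turnaround times in days from submission to first decision.
turnaround_days = [21, 25, 26, 28, 30, 31, 33, 35, 38, 95]  # one unusually slow paper

avg = mean(turnaround_days)
med = median(turnaround_days)
cv = stdev(turnaround_days) / avg  # coefficient of variation: spread relative to the mean

print(f"mean: {avg:.1f} days")                # pulled upward by the outlier
print(f"median: {med:.1f} days")              # closer to a typical paper
print(f"coefficient of variation: {cv:.2f}")  # spread expressed on a comparable scale
```

With one slow outlier in the set, the mean overstates a typical turnaround while the median does not, and the coefficient of variation expresses the spread in a way that can be compared across journals of different speeds, which is the point of the guideline.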

 

PEERE

 

This is a bottom-up network of scientists and stakeholders who are collaborating in an EU-funded research programme with the aim of improving journal and grant peer review by building evidence-based models of best practice, implementing initiatives and improving collaboration between researchers studying peer review. The chair of PEERE, Professor Flaminio Squazzoni, was interviewed by Exchanges in 2015, and at the ISMTE conference Marco Seeber of Ghent University gave an overview of progress.

 

The most important piece of news was that data sharing arrangements have been agreed between major stakeholders, i.e., publishers. Data will be fully anonymised in compliance with all relevant legislation, but there will be sufficient information to help PEERE researchers discover inter- and intra-community links and explore the patterns of behaviour and incentives which motivate best practice. PEERE is also asking specific questions such as: ‘What is the effect of having more people review a paper?’, ‘What is the effect of double blind versus single blind peer review?’ and ‘How do junior researchers compare with experienced researchers as reviewers?’

 

Some trends are already emerging. For example, there appears to be little benefit in assigning more reviewers unless an unfair review is returned or a reviewer is late. This seems like a good argument for aiming to have that extra reviewer in place from the start. In another study, it emerged that a particular conference tended to display fewer posters from authors who were unknown to its community, even when they were experienced authors in other areas.

PEERE seems destined to provide extremely useful data for improving both the fairness and the efficiency of peer review.

 

Conclusion

 

As with any conference, the most important aspect was the connections formed with colleagues both old and new, as well as the exposure to new perspectives from fellow professionals working in related areas of peer review. However, EO reporting was certainly the most useful hands-on part of the conference, open peer review perhaps the most open to debate, and PEERE the area with the greatest evolutionary potential.

 

Authors: Hannah Wakley (Senior Production/Managing Editor), Noel McGlinchey (Senior Editorial Assistant) and Iris Poesse (Associate Managing Editor).

 

Image Credit: Hannah Wakley

 

Anna Ehler
Society Marketing

It’s not only governments that are having an impact on the accessibility of research. In a follow-up to our last episode, How Governments Are Driving Open Science Around the World, this installment of the Wiley Society Updates podcast delves into some of the nonprofit and technology organizations helping to make science more open.

 

Listen to the episode below to hear from Liz Ferguson, Wiley’s VP of Publishing Development, and Ruth Bottomley, Senior Program Manager for Research Development and Support at the International Network for the Availability of Scientific Publications (INASP).

 

 

And, here’s where you can catch the previous episode: How Governments Are Driving Open Science Around the World.


For all episodes in the series, including a review of library funding trends and transformations in publishing technology, visit iTunes and subscribe to the Wiley Society Updates podcast.

 

Samantha Green
Society Marketing, Wiley

A survey captures opinions and feelings at a specific point in time. Like a photograph, it reflects the moment it was taken. So how do we ensure that the results of our surveys—and the insights we gain from them—are an accurate representation of the community?

 

The answer is multiyear studies. There are many reasons why we continue to survey on the same topic, even using the same questions. Let’s walk through some of the benefits of multiyear studies, using Wiley’s Society Membership Survey as an example.

 

Compare Changes in Opinion

Opinions, needs, and values evolve over time. Multiyear surveys allow us to track changes and map trends, which we can use to evaluate and, at times, even predict trajectories by extrapolating from the data.

 

For example, in 2016 we completed the second annual iteration of our Society Membership Survey. With only two years of data we can’t see trends in the way we’ll be able to in 3 years or 5 years, but we can still start to see some patterns. In both surveys, we asked nonmembers why they didn’t participate in a society or association.

 

[Chart: reasons nonmembers gave for not joining, 2015 vs. 2016 survey results]

 

There’s a lot of change here, in part because we didn’t include lapsed membership as an option. But looking deeper, we can see a huge decrease in the percentage of respondents who selected “I’ve never been invited to join.” How will these changes continue to track in future years? Only time will tell.
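
As a toy illustration of how this kind of comparison works (the numbers below are invented, not our survey results), tracking the same question across waves comes down to comparing response shares year on year:

```python
# Invented response shares (percent of nonmembers selecting each reason) for two survey waves.
wave_2015 = {
    "Membership is too expensive": 30,
    "I've never been invited to join": 25,
    "I don't see the value of joining": 20,
}
wave_2016 = {
    "Membership is too expensive": 28,
    "I've never been invited to join": 12,
    "I don't see the value of joining": 22,
    "My membership lapsed": 15,  # option added in the second wave
}

# Report the year-on-year change in percentage points for every answer option.
for reason in sorted(set(wave_2015) | set(wave_2016)):
    change = wave_2016.get(reason, 0) - wave_2015.get(reason, 0)
    print(f"{reason}: {change:+d} pp")
```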

 

Improve the Phrasing of Your Questions

Having a second chance at deploying a survey also lets you continuously improve your questions. During analysis it might become clear that your original question was confusing or that it doesn’t yield answers to what you really meant to ask.

 

Repeating a survey lets you refine your questions. Of course, this does affect your ability to compare identically phrased questions year on year, but that can be worth it to achieve clarity.

 

There were a couple of questions in our Society Membership Survey that got tweaked in the second year. In our first survey, we asked recipients for their primary reasons for renewing. They were given a few options, such as “I feel connected to the community,” and “I have never thought about it.” The next year, we expanded the question. Instead of asking about the primary reason they renew, we asked them to think back to the last time they decided to renew.

 

[Chart: what members recalled about their last renewal decision, with expanded answer options]

 

We also gave recipients more options to choose from. This question represented an opportunity to gain more information, and multiyear research allows us to do that.

 

Add New Questions

Sometimes your results might spark more questions, either ones that didn’t occur to your team the last time around or ones that you decided not to ask.

 

Multiyear studies are your chance to voice those unasked questions. In our second year of the Membership Survey, we realized that we were missing the voice of the lapsed member. We knew a certain percentage of those participating in the survey were former members, but we hadn’t asked them why they stopped renewing.

 

[Chart: reasons lapsed members gave for not renewing their membership]

 

This information is very valuable to us, and to societies thinking about renewals. Knowing both the why and the why not gives us a more complete picture.

 

Capture Data from New People in the Community

The research community—or really any community—is constantly growing as new individuals start their careers. Reaching early career professionals is critical to the continued success of any industry, and we need to ensure that we’re capturing the most up-to-date information about this group’s needs.

 

One of the most exciting aspects of the Wiley Society Membership Survey is that it captures the opinions and values of those with only a few years of experience and those who are students. It can be easy to assume that a community is knowledgeable about the professional offerings available, but that’s not always the case.

 

[Chart: survey respondents by years of experience in their field]

 

In both iterations of the survey, we captured information from those with less than a year’s experience in their fields.

 

Any researcher knows that you can never have too much data. Multiyear studies let us build up an ever-growing dataset. Surveys are an incredibly valuable market research tool for any organization. But survey fatigue is real. It’s important to make sure that your recipients know how you’re using their thoughts. Make your questions count, and make sure your recipients know that you value their continued participation in multiyear market research.

 

For more resources on the Wiley Society Membership Survey, click here.

 

Image Credit: Samantha Green

 

Samantha Green
Society Marketing, Wiley

The research community is global. But do members in different regions want different things? Societies need to be able to offer valuable benefits to potential members everywhere in order to succeed. This infographic explores some of the different motivations for joining, renewing, or leaving a society from around the world.

 

[Infographic: Member Engagement Around the World]

 

For additional resources on member engagement, download a free whitepaper here.

 

 

Anna Ehler
Society Marketing

Knowing where and how to invest in new research areas can be challenging. We recently sat down with Duane Williams, PhD, Vice President at ÜberResearch, to learn more about the database and software that they offer to help publishers and funders make strategic investment decisions in the research community.

 

[Photo: Duane Williams]


Q: ÜberResearch offers a database and software to help inform investments in research. How does it work?

 

A: Traditionally, when data is used to inform strategic research investment decisions, the focus is primarily on articles. That’s useful because publications are the currency of research. But publications present the research findings from a particular area within a researcher’s broader aims, and they typically represent work that was completed several months to a year in the past.

 

Our company is taking a different approach.  By focusing on the inputs, the research grants, we can provide a view into the researcher’s broader interests.  Grants also present an earlier indication of where the science is going, and we believe that’s very useful to inform strategic research decisions.

 

Q: But research grants lead to papers, right? Why aren’t published articles good indicators for where the research is?

 

A: Well, it is useful, but does not present the full picture. For example, consider the time lag between a grant and a publication.  It’s roughly on the order of a government administration. So if you solely use publications to inform where the research is going or how you might make a strategic investment, where you might create a new journal, those sorts of things, one thing you may be missing is the researcher’s response to a change in policy.

 

Q: Ok, you’ve convinced me! Could you share some examples of how the database can be used?

 

A: Some of the main applications include finding where the funding gaps are in research. Where can a small foundation put their investment to make a significant impact, and most importantly, avoid duplication of research efforts?  A small foundation doesn’t want to fund the same thing that the National Institutes of Health is funding, but a large organization like the NIH also needs to pay attention to where the different institutes and centers within it are investing.  So it allows for better coordination of investments and research if you have a strong sense of where the different stakeholders are putting their time and effort.

 

We also focus a lot on transparency and reproducibility of results. In this case, it is not just the reproducibility of specific scientific studies, but findings from assessments of the impact of different programs or the need for new initiatives based on the research landscape. Because that’s an important factor influencing how research investments are going to be made.  We can also start to identify where there are emerging areas of research. Maybe there’s a need to launch a new initiative, if you are a funder, or create a new journal, if you are a publisher.

 

Another very common use for the software is to identify experts in the field based on their research activity, publications, grants, patents, etc. This can be for a host of different reasons including reviewers for grant applications and manuscripts.

 

All of these kinds of decisions can be greatly informed through the use of our databases and software.

 

Q: So, say I’m a society looking to launch a new journal. How would I actually use the database to decide where my investment will have the most impact?

 

A: One method would be to enter a subject search term into the database, like on Google or any other search engine. Say you enter “Sickle Cell disease.” You’ll get the results in a ranked order with a score that tells you how relevant each of the results is to your query. You’d be able to see the grants for Sickle Cell, but also the publications, patents, clinical trials, and a lot of other bits of information that inform what’s happening in the field. You can drill down in that data to look at the translational research pathway from basic to clinical research, and you can also see all of the international funding. It really gives you the full landscape of how much is being invested in that area. It’s important to note that the query does not have to be as general as a disease area, and we regularly work with clients to create very custom queries. For example, a query to assess the volume of research focused on a specific gene or target.
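
The ranked, scored results described here are a general feature of text search rather than anything specific to ÜberResearch, whose internals are not covered in this interview. As a generic illustration only, here is a minimal relevance-ranked query over a few invented grant abstracts using TF-IDF and cosine similarity (Python with scikit-learn):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented grant abstracts standing in for records in a funding database.
documents = [
    "Gene therapy approaches for sickle cell disease in pediatric patients",
    "Hydration dynamics of cementitious materials in marine environments",
    "Hemoglobin polymerization and small-molecule therapies for sickle cell anemia",
]
query = "sickle cell disease"

# Vectorize the documents, project the query into the same space, and score by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Print the results in ranked order, most relevant first, each with its score.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```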

 

Q: ÜberResearch is a fairly young company. Do you find that people are using the database?

 

A: We are a small company, we started just three years ago, but already we’ve worked with many of the large agencies in this space.  Which I think for me validates the usefulness of the data that we are collecting and the approach that we are taking.

 

Q: I know that Wiley uses the ÜberResearch database to help our society partners with strategic portfolio development. For societies that don’t partner with Wiley, how can they get access to your offerings?

 

A: ÜberResearch offers a range of different solutions and not just a single software package, so there are several ways to engage. Most clients work with us by purchasing subscriptions to our software. That typically comes with training and support to inform the best strategies that fit within their current workflow.

 

We also offer a range of functionality to clean, categorize and enhance data, as well as APIs to deliver our content. Some clients also leverage custom implementations and custom software development.

 

For some of our larger clients, we also offer service contracts which take advantage of our infrastructure, and we also support them in actually carrying out the analyses.

 

Q: It sounds like you’re working on some truly innovative things. Thanks again for speaking with us!

 

A: My pleasure, thanks for having me.

 

For more information about the tools we discussed, visit www.uberresearch.com.

 

 

Image credit: Duane Williams

 

Justin L. Matthews
Assistant Professor

[Image: poster session at an American Geophysical Union Fall Meeting]

Congratulations! Your submission was accepted to a conference and you’re giving a poster presentation. Are you nervous? I’m here to tell you that designing a great poster is not difficult, but you should start the design process sooner rather than later. No doubt you’ve invested great energy in exploring the literature, crafting your research question, designing your study, and collecting and interpreting your data. Now you’re ready to share your findings. I want to encourage you to take this final step as seriously as you approached the initial ones. Successfully and effectively sharing your work is critical. Your work is meant to leave the lab; without dissemination, your research never really happened. So, how does one design an effective conference poster? Here are five things to remember when crafting an effective conference poster presentation.

 

1. Tell a coherent story

 

Think about the last poster session you attended. What did you enjoy? Was it the variety of the work being showcased? Was it the lively conversations between you and the presenters? What did you dislike? Were some of the presentations disorganized, difficult to read, understand, or follow? Think about how your audience will experience your work. Often, you only have about five minutes (or less) to relay your findings to someone. It is easier to do this when you have a well-organized, cohesive, and engaging story to tell. Start with a descriptive-but-specific title. People often decide to talk to you based on a combination of your title and your poster’s visual appearance; take care in crafting both. Once you have someone’s attention, invite them in to hear a summary of the work and tell them an organized and engaging story. First, share a little about what inspired your work. Then, relay your project’s main objective or research question. Next, tell them what you did to answer your question. Finally, tell them what you learned and why your results are important. Remember, you have only a few minutes to summarize months or years’ worth of work, so stay focused, organized, and on message.

 

 

2. Go beyond words

 

Effective posters use visual imagery to convey information. Remember, this thing you’re creating is a poster: a giant piece of paper that can include elements other than words. The visual appearance, in addition to your title, is what people will use to determine their interest. For the sake of your audience, use photos, figures, timelines, and graphs to tell your story. Most observers do not want to spend the majority of their time reading; they want to listen to you while looking at nice visual imagery. Allow the visually interesting parts of your poster to take center stage. Encourage your audience to gain insight into your work through imagery while you narrate their experience. Remember, for the most part, you will be right there next to your poster to guide your observer's experience.

 

 

3. Cater to your audience

 

Effectively communicating ideas is easier when you know your audience. Find out who will be attending your session. Will they be colleagues in your field, individuals from other fields, or something in between, maybe even (gasp) members of the public? Is your venue interdisciplinary or field-specific (e.g. the American Association for the Advancement of Science, the International Conference on Sea Otter Grooming Behavior, or a small department colloquium)? People outside your field might need help seeing the bigger picture or understanding the value of your findings. Think about how your audience will experience your message, understand content, and relate. If your audience consists primarily of people from your immediate field, you might be able to get away with certain acronyms or skimp on theoretical or methodological detail. However, if your audience includes people who are not familiar with your terminology, theories, or method, you need to customize your message accordingly. The last thing you want is an observer who is lost or frustrated.

 

 

4. Get to the point

 

Have you ever been in the middle of a conversation and you just wanted it to be over? Chances are, you have. Remember this when designing your poster. Don’t frustrate your audience with an overly lengthy presentation (tip: keep your poster word count below 600). Large conference sessions can include hundreds of posters and the people you’re talking to probably want to see a number of them. Prepare to effectively relay the details of your work in about five minutes. This gives you enough time to lay the groundwork for your research, talk about what you did, discuss what you found, and close with why your results are important. If someone wants to spend an extended amount of time at your poster, that’s great! However, allow them to make that decision individually.

 

 

5. Show that you care

 

When you present at a conference session, your presentation is a direct representation of you, your co-authors, your lab, and your institution. Remember this when crafting your presentation. You want your poster to be impressive. In addition to being a forum for discussing research, conference sessions are social gatherings where professional connections are often made. Remember, you may meet your future graduate school advisor, future post-doc advisor, or future collaborators/co-workers. To this end, create a poster that shows you care about quality. Remember, you and your behavior during the poster session are included in this category. Be respectful and kind. Take pride in your presentation and your work. Last, but definitely not least, please remember to shower.

 

 

Now that you have a handle on these five broad poster design guidelines, visit my website for a more in-depth look at the poster design process, including basic tips and tricks (e.g. fonts, colors, elements, sections, etc.) as well as ready-made poster templates to make the first step in designing your poster that much easier. Good luck!

 

 

Image credit: American Geophysical Union

 

Morgan Kubelka
Library Services, Wiley

With the increased emphasis on demonstrating measurable outcomes, academic libraries are under pressure to assess their impact on student learning and success. A look at the growing collection of industry research underscores this trend, which shows no signs of slowing.

 

[Infographic: seven indicators of the trend toward library assessment]

 

 

To learn more about how librarians are adapting to the culture of assessment, download the complete white paper.

 

Julian L. Wong
Managing Editor, Molecular Reproduction & Development

Peer reviews can be a game-changer for both the authors and readers of a future publication, affecting the story that the data tell as well as its impact on the field. The first of this two-part series focused on considerations for delegators of peer review. In part two, the attention is turned to the recipients of a delegated peer review.

 

Part 2: Considerations for the recipient of a delegated peer review

 

Before agreeing to take part in a manuscript peer review at the request of a senior scientist, an individual should consider that peer reviewing is a learned art, and those who do it well devote a substantial amount of time to the process. Indeed, the process may be more difficult than writing one’s own manuscript since the evolution of the story presented is not second nature, so following the rationale that the authors have presented may be challenging. In contrast to published manuscripts, these unpublished works may be early drafts with rough prose that do not effectively express the author’s ideas.

 

The following are questions for self-evaluation to determine if an individual will be able to complete the most constructive evaluation possible.

 

• Is the peer review process a collective effort in collaboration with the invited principal investigator? As noted in Part 1, this arrangement can be a very useful exercise for individuals who want to pursue careers that involve some degree of publishing (academic, clinical, or industrial). As is true for the entire peer review process, a collective evaluation is more constructive than one individual’s perspective/opinion. When possible, a consensus peer review from a single lab is the most effective, but could require more time to coordinate.


• How much time is allotted versus how much will be required to complete the review? Life happens. Experiments can fail. But the review process does not stop, and authors and editors are pressed for time. In my experience, peer reviews take longer than expected to achieve a level of criticism that truly benefits all parties involved. And don’t forget to include at least a 1-day break between the initial review write-up and its editing and submission to the editor. This forced break allows one to see the manuscript as a whole rather than focusing on otherwise minor blemishes, and allows one to reflect on whether or not the manuscript and its data contribute substantially to the field.


• Does the topic of the unpublished manuscript engage an individual’s expertise, either past or present? The degree of overlapping interest that a reviewer has in the topic of the manuscript will invariably make the review process more productive – after all, the assumption by editor and authors is that the peer reviewer will be an expert, not a student completely new to the specific field covered in the manuscript. Conversely, readings from those tangential to the work presented in the manuscript (e.g. those who follow the field closely, but are not “insiders”) are invaluable for providing new perspectives and identifying inconsistencies or assumptions that have become established without empirical evidence.


• Are the majority of methods used in the manuscript familiar? A thorough understanding of how the data in a manuscript were acquired is essential for determining if the authors’ interpretation of the results has considered the nuances of the experimental paradigm. Theoretical and working knowledge of techniques can be very different perspectives; having both is ideal.

 

A peer review should ultimately help an editor identify strengths and weaknesses in a manuscript, thus informing, not voting on, a decision. If one has any doubt about the relevance of the process, simply look back to the 9th-century guide for physicians, Adab aț-Ṭabīb, where the concept of peer review was first documented as a check-and-balance system to protect the sacrosanct oath of physicians. In it, the author recommended that peer review be used judiciously to help a medical council decide if a doctor had acted in the best interest of his patient. Thus, performing a peer review today is a rare opportunity that can influence the direction of a field, and is ultimately one path to scientific altruism.

 

Image Credit: Sean Locke/iStockphoto

Helen Eassom
Author Marketing, Wiley

Putting together the index for your book is a key part of manuscript preparation; however, it can often be tricky and time-consuming. What’s the best approach to take? What should you index, and what can you leave out? We’ve put together this helpful infographic, which offers 8 tips on compiling your index. For further information on indexing, and other aspects of manuscript preparation, take a look at our Book Authors resources here.

 

[Infographic: 8 tips for compiling your index]

Morgan Kubelka
Library Services, Wiley

Out with the old, in with the new.

 

The rise of embedded librarianship on academic campuses illustrates the departure from one-shot library instruction classes, which have historically limited the librarian’s ability to impact student outcomes in a meaningful way.

 

Strong partnerships between faculty and librarians are at the heart of the embedded librarianship approach, allowing librarians to transcend the “consultant” role with sustained involvement that integrates librarians into the planning and preparation of course work. These collaborative efforts not only build stronger connections between faculty and librarians, but they also improve the student learning experience and increase student engagement.

 

After a yearlong “experiment” with embedded librarianship, Multimedia Instruction Librarian Alison Valk of Georgia Institute of Technology and Assistant Professor of English Kathleen Hanggi of Doane College reflect on the factors that contributed to their successful partnership.

 

The Best Practices for a Successful Embedded Librarianship Partnership infographic below illustrates the recommendations that emerged from Valk and Hanggi’s collaboration.

 

[Infographic: Best Practices for a Successful Embedded Librarianship Partnership]

 

Download the complete case study featured in the white paper, Adapting Academic Libraries to the Culture of Assessment.

 

Alison Valk is the Multimedia Instruction Librarian for the Georgia Tech Library.  Valk received her Master’s in Library & Information Science from Florida State University and a BBA from Georgia State University in Computer Information Systems. She has been researching the benefits of embedded librarians in college-level courses and was recently published by the Association of College and Research Libraries spotlighting her efforts in this area.

 

Kathleen Hanggi is an Assistant Professor of English at Doane College in Crete, Nebraska. She earned her PhD in English from Emory University and held a Brittain Postdoctoral Fellowship at Georgia Tech from 2011-2013. She can be reached at kathleen.hanggi@doane.edu

Julian L. Wong
Managing Editor, Molecular Reproduction & Development

Peer reviews can be a game-changer for both the authors and readers of a future publication, affecting the story that the data tell as well as its impact on the field. Indeed, the most constructive reviews are not simply “reviews” of the work at hand; they provide feedback for all parties involved in a manuscript’s life. But who conducts these reviews? How are these individuals chosen? And what is expected of them? In this two-part post, we consider the delegation process from two vantage points.

 

Part 1: Considerations for the delegator


The breadth of scientific peers in a field is often overwhelming for an editor, who may only be tangentially familiar with the topic a manuscript covers. The choice of representative expert peers in the field is therefore critical for making an informed decision about the worthiness of a manuscript. Most editors consider the list of author-suggested peers and balance these scientists with members of the journal’s associate editor board as well as other published experts in the field. Unfortunately, this focused approach places undue burden on a select number of successful principal investigators, who may already be overcommitted (in which case, they should decline the invitation and instead defer to other appropriate suggested peers from whom the editor can select).

 

For those investigators feeling compelled to review for the journal or editor, one option is to delegate the review to scientists in their lab (with the consent of the editor, of course). In the ideal scenario, the manuscript is reviewed collectively by the invitee as well as by other scientists. This unified approach – if conducted in a confidential, forum-style meeting where open discussion among the participants may take place – also provides a great learning experience for those still in training, and will likely result in a more critical, neutral evaluation of the work at hand, with fewer extremely negative or nitpicky comments (see also Walbot 2009).

 

Before delegating the peer review of a manuscript, however, the invited expert should consider the following:

 

  • Is the science behind the manuscript in line with the expertise of the scientists being asked to spend hours evaluating it? We assume that editors have filtered through the details of a manuscript, but simple things may have been overlooked – such as the appropriateness of the topic to the principal investigator’s expertise. For example, the proliferation of similar or identical gene symbols among different organisms could incorrectly identify an “expert” in the wrong field. In this instance, the invited expert should contact the editor to confirm the appropriateness of the decision.
  • Is the individual within the lab scientifically mature enough to conduct the evaluation? Experts are requested to evaluate work based on overlapping interests, but this might infringe upon the ethics of the process as a whole since ideas are easily (and sometimes subconsciously) plagiarized. First, has the individual experienced the peer review process first hand as an author? If not, this might be a good opportunity to do so… with the caveat that the scientist is also able to distinguish the demands of critically reading a published article for content and ideas that bolster their own developing work from those of evaluating an unpublished manuscript for scientific rigor within the context of the published literature. If these basic criteria are not met, then a solo execution of the peer review by the delegated person may not be appropriate; rather, the optimal role for that scientist may be to provide an additional perspective within a collective review headed by the invited principal investigator.
  • Realistically, will the individual in the lab be performing the entire review? Principal investigators are time-constrained. Does the due date of the peer review conflict with a grant or teaching deadline, thereby precluding the principal investigator from properly overseeing the review process? If the intent of the invited principal investigator is to copy-paste the delegated review into the appropriate field, then the invited principal investigator really should contact the editor and recommend this other person as an alternate expert (i.e. decline the invitation, and instead defer to the identified individual in the lab rather than outright plagiarize). This respectable path forward enlarges the pool of expert reviewers, provides an essential step in the training process, and encourages the individual to stand behind their own opinion and perspective. This deferral also upholds an honest relationship between the principal investigator and the inviting editor, and, indirectly, the authors.

 

Image credit: Anne Hoychuk/Shutterstock
