Some implications of "digital" for scholarly writing and publishing

The role of the library in evaluating tools for the scholarly workflow

[Image: the famous "101 Innovations in Scholarly Communications" graphic from 2015]

A workshop at the UKSG Annual Conference 2024 revealed, I believe, a vital role for the institutional library.

I joined a fascinating workshop run by the energetic and motivating Judith Carr and Rachel Bury of Edge Hill University (and my thanks to them for making this post possible). Carr and Bury took as their inspiration the famous 2015 project examining Innovations in Scholarly Communications, by Kramer and Bosman, which listed 101 tools to aid the scholarly workflow, divided into six “phases” (expanded to seven in some versions of their work, but let’s stick with the six shown in the well-known infographic):

  1. Discovery
  2. Analysis
  3. Writing
  4. Publication
  5. Outreach
  6. Assessment

Examples of tools include the well-known reference managers Zotero, EndNote, and RefWorks, which Kramer and Bosman placed under “assessment” (although of course a tool can be used in more than one phase). In subsequent years, the Innovations in Scholarly Communications project identified over 400 such tools (and no doubt there are many more today). This proliferation raises the question: how is any researcher even to know about, let alone evaluate, what is available? The goal of the Kramer and Bosman project was to establish “to what extent researchers are using these and more traditional tools” and, at the same time, “to help decision-making by stakeholders supporting researchers”. Their appraisal included tagging each tool against three criteria: “open”, “efficient”, and “good” (terms that, I believe, were subjectively assigned).

Fast forward to 2024: the goal of this much-needed workshop was not so much to determine which tools exist today as to see how the participants (mainly library staff) appraised some of the available offerings. The 30 or so participants were divided into groups and given two tasks: first, to identify criteria for evaluating innovative tools; second, after the groups were reshuffled, to look at five software products or services and evaluate them using the criteria identified earlier.

In the first session, the groups identified over 25 evaluation criteria for scholarly tools. Here are a few of them:

  • Openness (vs proprietary)
  • Reliability
  • Standards-based
  • Interoperability with other tools
  • Time required to get up to speed

Many of the criteria they came up with were to be expected, but it was surprising that, even though one group asked “is it better than what I have already?”, the groups didn’t explicitly mention any kind of metrics – after all, within an institution the most likely centre of bibliometric expertise is the library, and the whole subject of bibliometrics was created to provide evidence-based assessment of scholarly practice.

In the second session, the groups were each given five tools and asked to evaluate them. My group, for example, was assigned tools for outreach: Altmetric, Kudos, ORCID, LinkedIn, and Google Scholar Profiles.

Did my group find the “right” answer? I was fascinated to see how the group addressed the problem. Individual participants offered anecdotal evidence, good or bad, from their knowledge of each product, and one member of the group then reported the findings back to the main meeting. Of course, in the time available it was not possible to do a more systematic assessment, but (for what it’s worth) my group recommended that researchers use ORCID, LinkedIn, and Google Scholar Profiles for outreach.

Strengths and weaknesses of the process

Here is my subjective take on the workshop, not the conclusion of the group as a whole:

  • The anecdotal comments by participants were not necessarily reliable. One group member stated “you have to pay to use LinkedIn”, which is not the case. One critical comment on a product was often enough to move the discussion to the next tool.
  • The group did not distinguish between must-have tools, such as ORCID, and nice-to-have tools, such as a LinkedIn profile. Perhaps this is one of the inevitable limitations of dividing up the scholarly workflow into six areas, since ORCID is about far more than outreach.

My conclusions

Evaluation of software is a notoriously subjective and contentious area. The library is of course mindful of the need not to be prescriptive – if researchers choose to use tool X, that is their decision. Where tools are widely used by researchers, the library can (and does) assist by providing guidance on their most effective use – for example, crib sheets on using Zotero (those from Imperial College and Illinois State University are just two of many).

But perhaps even more important, the library also has a role in specifying and communicating essential practice. For example, in the UK, if you are an academic at a higher education institution, you must comply with the Research Excellence Framework (REF) for your institution by depositing your research papers in the institutional repository in order to gain funding: you have no choice. The library’s role here is to guide researchers on how best to manage that compliance.

I would argue that a tool such as ORCID represents a similar three-line whip for researchers, even if it is possible (albeit unlikely) that a researcher could publish an article without including their ORCID iD. While there are plenty of crib sheets for using ORCID, I don’t think the group fully communicated the vital role of the library in identifying essential (as opposed to best) practice. While ORCID has some defects (it’s alarmingly simple to create a new ORCID profile, and one participant stated, “we have five John Smiths at our institution, and we don’t know which ones are ours on ORCID”), examples of ORCID not being used properly are not a justification for not using ORCID. An ORCID profile is today essential for any researcher at any institution.
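(As a practical aside – not something the workshop covered – the “five John Smiths” problem is at least partly tractable, because ORCID exposes a public search API. The sketch below is purely illustrative: the query fields are those I understand ORCID’s search syntax to support, and the surname and institution are placeholders, so treat it as a starting point rather than a recipe.)

```python
# Illustrative sketch only: find public ORCID records whose claimed affiliation
# matches a given institution, to help disambiguate common names.
# Assumes ORCID's public search API (pub.orcid.org) and its Solr-style query
# fields; the surname and institution below are placeholders, not a recommendation.
import requests

ORCID_SEARCH_URL = "https://pub.orcid.org/v3.0/search"


def find_orcids(surname: str, institution: str) -> list[str]:
    """Return ORCID iDs for records matching a surname and a claimed affiliation."""
    query = f'family-name:"{surname}" AND affiliation-org-name:"{institution}"'
    response = requests.get(
        ORCID_SEARCH_URL,
        params={"q": query},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    results = response.json().get("result") or []
    # Each result carries an "orcid-identifier" block; "path" is the bare iD.
    return [r["orcid-identifier"]["path"] for r in results]


# Which of our many Smiths claim an affiliation with this (placeholder) institution?
print(find_orcids("Smith", "Edge Hill University"))
```

A library could run something along these lines against its own researcher list and then check the candidate records by hand – the point is simply that persistent identifiers make the question answerable at all.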

All credit to Carr and Bury for such a thought-provoking workshop. In the context of debates (for example, at the recent Researcher to Reader conference) about whether libraries should be abolished, here was one role (among many) that a library could carry out: to identify, and to stress the importance of, persistent identifiers such as ORCID. They are vital components of the scholarly enterprise, but it is all too easy for them to get lost in the swirl of new products being pitched to academia every day.


2 Comments

  1. Aaron Tay

I think part of the issue is that it’s hard to be objective, because a lot of the time what is good depends on your own specific context. An obvious example is the use of social media: if all your peers are on LinkedIn, it’s the right place to be.

But I do agree that librarians need to do their own objective evaluations, e.g. of AI discovery tools, beyond just trusting what vendors tell them. And some of us do.

    • Michael Upshall

Absolutely, Aaron! I think the library role is two-fold: to support and encourage best practice with the tools that researchers use, but also to make clear that the use of PIDs such as ORCID is essential for good scholarly practice (and to make sure you aren’t confused with all the other researchers at your institution with the same name). My concern is that researchers are not always as aware as they should be of what is essential, as opposed to what is simply used by their colleagues.
