
This lengthy report (some 20,000 words) is quite a challenge to review. It was created out of interviews with 49 major figures in the scholarly landscape, all of them from the anglophone Western scholarly world: a mixture of commercial publishers, librarians, university presses, and providers of common tools and services for the scholarly infrastructure. The summary of recommendations alone is 922 words long – too long. The recommendations themselves run to 8,400 words: a very leisurely set of recommendations (all 25 of them). If asked to summarise this report, I would say it provides a well-informed and insightful overview of the present-day scholarly landscape and its issues, but not much that is immediately useful in terms of recommendations. If you don’t manage to read as far as recommendation 25, in other words, I wouldn’t lose any sleep over it.

I agree there is a second digital transformation under way, in which people realise the possibilities of digital. But digital transformation needs not only a shared infrastructure but also a whole new digital way of creating and evaluating scholarly content. The authors valiantly attempt to outline how this can take place, dividing their recommendations into four categories:

  1. Identifier providers
  2. Enterprise publishing systems
  3. Discovery, collaboration and trust
  4. Preservation

To their credit, the authors recognise that “Few [of the gaps] are primarily the results of technical challenges. Rather, they are the result of stubborn strategic, governance and business model impediments.” It’s a business issue, not a technical problem.

However, when we get to the recommendations, many of them are anything but straightforward – and unlikely to be carried out. Many read like a template for further reports by consultants. For example, recommendation 6:

“We recommend that publishing organizations and research libraries conduct a user-centric study of how signals of trust and authority in research are actualized in the current discovery and access ecosystem (Recommendation 6).”

Who is going to do this? Scholarly publishing is made up of many organisations working mainly in a business environment, seeking to maximise profits and/or minimise costs, even if they are not-for-profit. Few organisations (apart from a handful of funders) are interested in carrying out neutral, industry-wide user-centric studies.

Several of the recommendations are vague, for example recommendation 19: “We recommend that publishing organizations and university leaders together examine the best long-term models for defining the boundaries of the scholarly record.”

There are a few infelicities in the discussion section. For example, in the section on consolidation and merger of publishers, the authors conclude, strangely: “Should this consolidation scenario play out, each remaining independent publisher could be expected to have an essentially separate technology stack that it develops itself.” Why? The smaller the publisher, the less able it will be to run a separate technology stack; it is more likely to make use of a shared platform, whether commercial or open-source. The authors state as much elsewhere in the report: “small or not-for-profit publishing organizations feel the pressure to utilize cost-efficient out-of-the-box solutions”. I would see the situation rather differently: many players adopt open-source systems as the most practical way of achieving some kind of independence from commercial providers. The rise of repository platform software such as DSpace, used by many institutions but drawing on a common code base, is an example. But for the most part, the landscape description is accurate; it’s just the conclusions and recommendations that disappoint.

For example, the report notes that Elsevier, Sage and Wiley have created software offerings in different sectors, some of which compete with Clarivate and Digital Science, but then doesn’t reach any great conclusion: “The result is that their interests, while sometimes aligned, in other cases are not. This has tremendous implications” – but we aren’t told what these implications are.

The report gives a whistle-stop tour of the major issues affecting scholarly publishing (such as research integrity, citation metrics and their faults, and altmetrics), but again the conclusions are sketchy. As for open access, “better infrastructure is needed to ensure open access is viable and sustainable”. That is hardly much of a summary of the debate around open access: what does “better” mean in this context? There is a discussion of “atomization”, the separation of an article into components such as data and methodology, with multiple versions of the text in different repositories, and of how those components need to be linked (although there is almost nothing about what data replicability entails), but no clear guidance on how the issue might be solved.

Interviewing key decision-makers in both commercial and institutional organisations will inevitably produce conflicting views. The main conflict, stated rather euphemistically, is that “in our interviews, a real tension emerged between those who believe that publishing should primarily be an open system that maximises inclusive participation and those who believe that publishing should primarily be responsible for securing the boundaries of the scholarly record to ensure it is validated and trustworthy”. Is that really the fundamental distinction? No mention of making money, in either case?

Recommendations

When it comes to the recommendations, there are simply too many of them, and they are too varied, to be memorable. Many of them amount to “create a study”, and many require collaboration across the various players in the market. Whether these are commercial rivals or not-for-profit services in adjacent areas, such collaboration is unlikely. Recommendation 12, for example, proposes that “publishing organizations, funders, senior research officers, and perhaps other stakeholders” all work together to design a spine model for the scholarly record. Is this likely? More helpful might be to assess how industry collaboration has come about in the past, for better or worse.

Some of the recommendations are so obvious as to be hardly worth stating, for example recommendation 17: “We recommend continued investment in automated editorial tools that can support the detection of fraud and misconduct within manuscripts.” That investment is already happening.

Overall, I feel this is a report that tries to cover too much and ends up providing little concrete guidance for the future. A smaller number of recommendations, each with a suggested path to achieving it, might have been much more successful.