
I expected great things from this book, published in the Charleston Briefings series. Digital scholarship has been around long enough to assess how things have changed since the era of print; Google Scholar has existed since 2004, giving us over fifteen years in which to assess and analyse the digital workflow. Yet this book is a surprisingly diffident account of the digital scholarly workflow. It reveals what I can only describe as a deep-seated uneasiness on the part of the author, Steven Weiland. Apart from e-mail, Weiland seems reluctant to express certainty about the advantages of any digital tool. He doesn’t think much of Google Scholar, but that is just one of the digital tools about which he has mixed feelings.

His theme, repeated throughout the book, is that different researchers have different styles of working. The book ends with a quote from one researcher: “I do what works for me.” I have no problem with emphasising differences in how researchers work, but Weiland’s writing style manages to emphasise the negative while appearing balanced. This is a common technique of journalistic writing, less so of academic writing. It works as follows:

  1. State an argument about which there is disagreement.
  2. Quote a researcher who states one point of view.
  3. Include a quote that would appear to confirm what you have just stated.

This way of writing gives the impression of being citation-based, and of course it is, although it does not constitute any definitive conclusion. Without any quantitative data, the result of the above discourse is to emphasise the final view, which in this book represents the author’s doubt about digital scholarly workflows.

We don’t seem to be able to agree on exactly what scholars do. David Crotty is quoted approvingly as an example of a “real” scientist who spends (or used to spend) all his time in the lab doing experiments:

It’s important to remember that the primary job of scientists is doing science, performing experiments, discovering new things. Most social tools for scientists are, by contrast, designed for communication, for talking about science. No matter how great such a tool is, using it is never going to be as important as doing their ‘real’ work. (ch 7).

There are multiple problems with this statement. Firstly, not all digital tools are social tools: AI-based tools for discovery are not in the least social. Secondly, even for geneticists like Crotty, not all research is lab work. The literature review is an essential part of scientific work – you only start in the lab when you are confident you are not repeating work that has already been done. If we extend the argument to the humanities and social sciences, much academic work – such as the Weiland book we are reading – is not based on any field work at all; it is a description of, and commentary on, the work of others. Does this make it less valid?

Weiland claims that the pressure on academics is higher today – but he seems not to acknowledge that tools have existed for over 100 years to facilitate faster reading. These tools predate digital scholarship, and hence seem to be exempt from criticism. The invention of the abstract, which dates back to the late 18th century in the minute-books of the Royal Society, and the rise of abstracting and indexing services in the 19th century, were designed to provide researchers with a way of getting the main point of an article before reading it in full. There is no claim in this book that the invention of the abstract was a retrograde step for scholarship. Yet, like skim reading, reading an abstract, on paper or digitally, only reveals part of the full meaning of an article.

Worse, it would appear that the library has no role in evaluating or endorsing digital tools. Weiland quotes approvingly the work of Bosman and Kramer, who look to encourage the take-up of digital tools, but note that scholars “must know whether using a new tool will reduce time needed to get desired results or even get results that were hitherto impossible to get.” Weiland comments on this statement:

But making such judgments isn’t easy and the proliferation of tools from many sources and moving among them make interoperability essential. (p47)

Can the library not help by providing some evidence-based evaluation of these tools? It would appear not: “Antonijevic and Cahoy (2014) report that only about half (sometimes fewer) of the scholars they studied ‘felt the library should have a role in instructional support relative to the research workflow.’ Instead, researchers claimed that adopting technology was the ‘responsibility’ of the scholar” (p50).

Underlying all this appears to be the author’s suspicion that print-based methods were somehow better, a theme through the book:

  • “Many scholars are diffident about the constant stream of new apps.” (ch 7)
  • “Like others, Nicholas and Clark worry about the impact of easy and rapid digital search on reading for everyone whose work is with texts.” (ch 4)
  • “Search and stockpiling go together.” (ch 4)
  • “The problem is the apparent willingness of researchers to accept problems of overload as a by-product of having so much at hand.” (ch 6 – which seems to suggest it is researchers’ own fault if they cannot keep up with the literature)

And the usual suspects are quoted expressing doubts about the digital environment: Nicholas Carr, for example, whose titles The Shallows: What the Internet Is Doing to Our Brains, followed by How Smartphones Hijack Our Minds, give an idea of his views on the matter. And James Evans, whose 2008 paper, “Electronic Publishing and the Narrowing of Science”, was refuted by Larivière et al. (2009) – a paper not cited here.

So, digital tools are not the benefit they appear to be; many researchers don’t like them; and in any case, libraries have no role in evaluating or endorsing them. Each researcher has an individual way of working, so we cannot, it would appear, make any recommendations. This isn’t much of a positive take-away for researchers looking to establish best practice for a digital scholarly workflow.

Personally, I think the task of an academic has not changed fundamentally with the advent of digital publishing. Then as now, the challenge has been to identify what to read; the digital era simply offers better tools to facilitate that process. Ideally, there should be some measured studies of whether these tools are useful or not, exactly as Bosman and Kramer suggested. After all, it is the library that has developed bibliometrics, providing reliable analytical tools with which to make informed decisions, and I think it is the library that should provide these studies. Armed with some conclusions, we can then move forward and identify just what an effective digital scholarly workflow might be.