The academic research journey must be one of the most studied aspects of higher education. One of the most visually impressive studies was the infographic by Bosman and Kramer, showing the academic research journey in six roughly equal stages, if only to indicate the proliferation of digital products and services trying to assist (or to confuse) the process.

Nonetheless, the papers I have read remain, in my view, somewhat simplistic, perhaps because the academic user journey is considerably more elaborate than most studies allow. What actually happens between a researcher having an idea and that idea reaching publication is far more nuanced than the scholarly literature might suggest.

For example, an article by two sociologists, McMahan and McFarland, which they helpfully summarise in the LSE Impact Blog, reveals two interesting points about the researcher journey. Firstly, they show that review articles – summaries of recent research in a domain – actually reduce the number of citations for articles they mention. Secondly, they point out that review articles create a narrative around research, which can often make individual papers more accessible.

Intuitively, I find both propositions reasonable. If you read about an innovation in a review article, you frequently no longer have to read the original article. Is this a good or a bad thing? Toby Green, while head of publications at OECD, used to complain about the disparity between the number of times a new OECD paper was mentioned in the media and the number of times the original paper was actually accessed. Most journalists who referred to the article had clearly not read the original, at least not on the site where it was published. As undergraduates, we are told to check the sources for any opinion we pronounce, and yet all we learn is how to cite sources without reading them. How many of the thousands who cite famous statements from Adam Smith’s The Wealth of Nations have actually read the original? Life, as they say, is too short.

What all this reveals is that counting citations is a classic example of using a machine-based proxy to represent value. Now that I think about it, the counting of citations is the kind of measure a computer scientist might have invented: it is something that can easily be counted. In fact it was introduced by Eugene Garfield in 1960, so it predates the widespread adoption of what can be described as the computing mindset.

Review papers distort the academic process in several ways. The authors of this paper fail to point out that review papers themselves are not peer reviewed. They are “peer-invited”, a dubious phrase which means simply that the authors are academics invited by the editorial board of the Annual Reviews journals to write a review article. The absence of peer review may mean that the authors of these reviews have unusual power. It is well known that review articles attract many more citations than the original research articles on which they are based. After all, the research challenge, as one reviewer told me, is how to get the 50 or so papers published in his field each week down to the 10 or so that he has time to read. A review article dramatically simplifies that task.

The other finding from this article is that review articles construct a narrative: they don’t just say this or that article is good, but create a story around it. I find the narrative idea very powerful. When I read about the development of AI over the last 40 years, I found Pedro Domingos’s presentation of AI research, divided into five “tribes”, in his book The Master Algorithm (which I wrote about in a post some years ago) very helpful. It appeared in a book, rather than a review article, but that simply reflected my less immediate contact with current areas of research. I didn’t agree with all of the five tribes, or with the divisions between them, but the image of a wheel with different schools enabled me to position individual thinkers in a fairly simple and intelligible framework. In other words, I find narratives helpful. Bosman and Kramer’s infographic, mentioned above, is a classic example of depicting a process in usefully intelligible stages.

While the authors claim that the power to build a narrative can be dangerous, I think the greater danger is that these review articles are not themselves peer reviewed. If Professor Brown doesn’t like Professor Smith, then Brown’s review need not even mention Smith’s article, and who is to say something has been left out?

I also have a problem with the way this article lumps together research articles from many different domains, as if they all follow the same user journey. This is a common failing of studies of the academic user journey. Typically, the researcher looks at biomedical subjects (which are usually available in more consistent XML formats, and hence easier to process) and then extends the argument to the humanities. The Annual Reviews journals cover a huge range, from medicine to fluid mechanics to financial economics. My guess is that researchers in these different disciplines are very unlikely to follow the same user journey when looking for relevant content. For example, the humanities and social sciences still publish much of their original research in book, rather than journal, format.

So, to conclude, long live review articles. Let’s make sure they are peer reviewed, and, while we are about it, why not combine them with “league tables” of the most cited articles, counting only original research, not literature reviews?