Some articles are not really about AI at all; this is one of them. David Beer, Professor of Sociology at the University of York, has written a post with the above title (actually a question in the original). But the answer to such a question is self-evident: of course we should. Professor Beer’s gripe is about a summarization tool that uses AI to identify the most significant phrases in an article and then creates a summary. The tool, Paper Digest, was created by two Japanese academics, and it is an impressive piece of work, although it doesn’t work very well; Professor Beer doesn’t notice this (or doesn’t mention it). “It reduces the article down to some key points, making it rapidly digestible”.
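Paper Digest's actual method is not public, so as a purely illustrative sketch of how extractive summarization of this kind can work, here is a minimal frequency-based version in Python: score each sentence by how often its words occur across the whole text, then keep the highest-scoring sentences in their original order. The function name and scoring scheme are my own assumptions, not the tool's.

```python
import re
from collections import Counter

def summarize(text, num_sentences=2):
    """Rough extractive summary: score each sentence by the frequency
    of its words across the whole text, keep the top few in order."""
    # Split into sentences on end-of-sentence punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count word frequencies over the whole text.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Normalise by length so long sentences aren't automatically favoured.
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Preserve the original reading order of the selected sentences.
    return [s for s in sentences if s in top]
```

Real tools use far richer signals (position, section headings, learned models), but the principle of picking out the statistically "most significant" sentences is the same, which is also why such tools inherit the phrasing of the original paper, for better or worse.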
But he still isn’t happy. “The concern I have here is that the move to AI-informed efficiency might actually end up cutting out the bits of research that produce all sorts of unexpected and unforeseen outcomes … What if the AI summary leaves out the little nuggets that open up new directions?”
This has nothing to do with AI; it is about a type of reading. As Francis Bacon said, “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested”. You don’t apply the same kind of reading to all texts. If you are a researcher confronted by 100 new articles on your subject published in the last month, there is clearly no way you can identify “the little nuggets that open up new directions” in all of them. You just want something to tell you which (say) ten of those hundred articles are worth reading in detail. The AI tool enables that skim reading. It may be as simple as identifying some core concepts in those hundred articles, or a reference to a key paper, or even an illustration, but as a researcher you have to find some way to skim the contents.
What Professor Beer is doing is levelling a common criticism at AI: because the tool doesn’t replace detailed reading, he claims, it falls short. It would be better to judge AI tools in terms of “fit for purpose”. A machine can skim-read 100 papers much faster than I can, and if I trust it, I will use it. A professor, of all people, should recognize the kind of “reading” the AI provides. Academic researchers often describe their methods for skimming a paper, and there are plenty of articles describing ways it can be done, such as here and here. I’m sure they all miss “the little nuggets”, just like the AI tool.
Looking in detail at the summary provided by Paper Digest, I found rather variable results, nowhere near as readable as the example provided by Professor Beer. I tried an article about a clinical trial comparing low-dose aspirin with high-dose aspirin, and the tool gave me this very cryptic statement for “what this paper is about”:
Given the high mortality of intracranial bleeding in intracranial endovascular treatment, a low-dose aspirin strategy is subsequently adopted when using low-dose aspirin is no harm compared with high-dose aspirin in prevention of periprocedural complications and major vascular events in duration of dual antiplatelet therapy.
That’s hardly an acceptable English sentence, but to be fair to the software, it is exactly what the authors wrote in the paper. So the problem here lies with the original paper, not the summary.
I certainly cannot complain about a tool that summarises content for me. I will then decide for myself when to search for the little nuggets that might spark my own research.