Reading Time: 4 minutes

This survey, produced by the Enago Academy, is a thorough and detailed look at the use of AI in academic publishing (the completed survey, “Role and Impact of AI on the Future of Academic Publishing”, is available for download here). While AI tools are becoming more widespread in academic publishing, they are by no means universally adopted, or even fully appreciated, and it is valuable to get some idea from users of what they actually do with AI tools, as well as what they think about them. How tools are perceived by users is often very different to the software creators’ idea.

The survey comprised 350 respondents from 54 countries, which means there is a good range of viewpoints. The bulk of respondents were millennials (born 1981-1996) and Generation Z (born 1997-2012): millennials are early- and mid-career researchers, while Generation Z are today’s undergraduates. In principle, these age ranges comprise the most technology-aware segments of the population; they should be the most familiar with, and the most excited by, AI tools. But was this actually the case?

What does the survey tell us?

The summary states there is one overwhelming driver for the adoption of AI: that the number of articles is increasing. Although I agree with much of the survey findings, I don’t think the increase in scholarly output drives the adoption of AI – at least, not for the researchers themselves. Discovery has always been a problem, because even ten years ago, there were more scholarly articles in science than researchers could deal with manually.

As for using AI in the submission process, publishers may be looking to make their submission systems more efficient, but researchers are not, I believe, “in need of effective tools to ease their writing efforts” – at least, I don’t think they are aware of such a need. I’ve spoken to many researchers, and none of them has stated “I wish I had a tool that could automate some of this…”.

Nonetheless, researchers are aware of AI tools in academia, and the most common tools they mention are language improvement (Q2), plagiarism detection (Q3), and image recognition (Q3). Although there are many other AI-based tools available, they are ranked considerably lower by researchers: for example, only 16% stated they were aware of AI tools for content discovery and for translation (Q3). AI is certainly being used for both of these tasks, but I believe that for both, the shift to AI has been largely imperceptible to end users: somewhat like the Google search interface, which, behind the scenes, has incorporated more and more AI to convert the string of characters a user types into a semantic search. In these areas, researchers are using AI without being aware of it.

When asked which tools they were most familiar with, the respondents chose the Elsevier Journal Finder in first place – a good example of a simple AI-based tool that in principle assists researchers to find the most relevant journal for their article. Yet, according to question 7, the most important benefit in implementing AI for academic publishing is helping to automate repetitive tasks. AI can do much more than this, but it is perhaps a good starting point.

Who is this survey for?

Perhaps this raises the question of who the survey was for. Was it for researchers or for publishers? Some of the questions seem more suited to one group than to the other. For example, Q7 was about the benefits of implementing AI in research and publishing. One of the suggested options was “enhances customer experience”. What sense would that make to an individual researcher? Researchers are not their own customers.

Question 8 tackles the perceived limitations of AI tools, a very important topic. What concerns do academics have? There is a clear first answer here: 67% of respondents were worried about bias. Unfortunately, the question is not worded very clearly: it suggests that one limitation of AI is that a tool may only make suggestions or take decisions based on available data. But of course AI tools can only be based on available data; humans, by contrast, can “think outside the box”. The real problem is that any bias in the available corpus will inevitably be transmitted to the results, so the results will be biased too. There has never been an AI tool that can make use of unavailable data. The problem of bias is frequently (although not always) an issue with the underlying corpus: an analysis of all the authors on PubMed shows a clear majority of male authors over female authors, for example.

Question 9 also has a clear response. The primary obstacle for implementing AI was seen to be a lack of understanding of AI. Presumably this does not mean a lack of understanding by the developers, but by the users. Here, I think, is very clear evidence of a need for more training and awareness about AI tools.

Lack of understanding of AI is also revealed in Q13, where the biggest concern from respondents about AI is “machines becoming more intelligent than humans”. Most, if not all, of the AI tools currently in use in academic publishing are induction-based utilities trained on a specific, and very limited, data set. The idea that any “knowledge” gained from these tools could be aggregated to create a general intelligence is simply not the case. Nonetheless, 77% of respondents to Q14 feel “concerned” or “unsure” about the rise of AI in academic publishing. And 75% of respondents stated their department needed expert advice to use AI for publications (Q16). The survey certainly doesn’t suggest widespread contentment with AI.

Next steps

What does this tell us? Users are not aware of many of the current uses of AI in publishing. Users are concerned about the adoption of AI and feel they need help in using AI tools. In other words, we need not only the AI tools but also effective evangelism, user interaction, and feedback, to identify how the tools are perceived and used, before we can implement AI tools effectively.

In conclusion, a fascinating survey, clearly laid out, and containing some clear interpretations of the data collected. I look forward to next year’s survey.