Now that artificial intelligence (AI) tools are widely used across academic publishing, how can we make informed assessments of these utilities? One problem is that new utilities appear almost weekly. It’s challenging for any academic to keep up with what is available, let alone trial new solutions, many of which cover only one small area of the research process.

We need evaluations of AI tools in the context of how they are actually used – by checking with users. Information professionals are ideally placed to carry out these evaluations. This kind of evaluation does not require programming skills; anyone commissioning or managing AI utilities needs only a simple toolkit of questions to ask. Much IT development effort goes into creating an algorithm, but it frequently takes no account of bias in the training set, or of the context in which the tool is used: does it actually deliver a measurable benefit to the academic workflow?

In an article published in UKSG Insights, I outline where AI is currently being used successfully (researchers and publishers are often surprised to learn how widespread AI tools already are) and suggest a methodology for assessing new tools. The recommendation is for a new kind of information literacy for AI, something that has been suggested in the literature but not, I think, as widely adopted as it should be. The goal is not to endorse or to discredit AI, but to enable us to make intelligent and informed appraisals, without the need to learn coding.