

Books on Shelves (Wikimedia)

An interesting report, based on a 2015 survey but published in 2016, gives an overview of how one major university, Oxford, enables its users – not just researchers, but institutional staff (such as museum curators) as well – to find information about its own collections.

The position at Oxford is complex – over 30 libraries, not including the College collections. Not surprisingly, one recommendation of the report (the analysis, but not the information gathering, was managed by consultancy Athenaeum21) was to provide more signposts: both about the collections themselves and about which people to ask for information.

Clearly, the recommendations of the survey are applicable not just to Oxford. Oxford may have more libraries than most, but the problems of resource discovery are not unique to it; all institutions face similar issues (multiple collections, inconsistent or missing metadata, user demands for instant results).

Who was interviewed? The study combined interviews with users (undergraduates, postgraduates, faculty) and “providers” (librarians, curatorial staff, digital content managers). Interestingly, the survey also included interviews with vendors about solutions being provided (present and future) to meet user needs.

So, to the findings. For me, the most surprising result was not highlighted as a key conclusion of the survey, but simply mentioned in passing, with no data source stated: only 20% of students and faculty at the University of Oxford use SOLO, Oxford's own digital library catalogue. I'd love to know what they used instead, and why they shunned something implemented with so much effort – but the statement is not investigated further.

While some of the conclusions are admirable, they don't appear to lead to immediate, obvious action. For example, one recommendation was "supporting researchers' established practices". That sounds great, but is more problematic than it seems. Oxford is a very wide-ranging university, and subjects studied range from Arabic to zoology, via forestry, medicine, Old English, and a host of others. Clearly, there is no single research journey that meets all those user needs. In fact, the survey points out that resource discovery is very discipline-specific – for example, "ArXiv for Physics, PubMed for Medicine, WestLaw or similarly specialized tools for Law". There could be as many user journeys as there are subjects – that would be a lot of software (or user guides) to produce.

Another conclusion was that "students need to learn how to search". That might be the conclusion of the providers, but surely not of the users. Since the advent of Google, learning how to search has been replaced by an approach that removes most, if not all, of the need for the information skills demanded by more traditional interfaces. I would have thought that battle had by now been won – there is an inexorable trend towards the simplest possible interface, one that attempts (with varying degrees of success) to answer any query with a screenful of results. Like it or not, students will expect to search in that way, so teaching students to search seems not to be the answer. Making the interface more obvious would be a better way to go. And that applies to any institution, not just to Oxford.

To that end, providing better and more targeted metadata is one solution. Specifically, the report recommended "getting existing metadata out to the places where many researchers work" – for instance, by exposing metadata for indexing by Google, which is certainly one method. Unfortunately, you can only move metadata where it already exists – and even where it does exist, it is probably not sufficiently granular to be of great use to researchers. The growth of machine-learning tools to facilitate discovery is likely to have more effect here than simply moving poor-quality metadata from one place to another.
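As an illustration of what "exposing metadata for indexing by Google" can mean in practice, here is a minimal sketch (the record fields and values are entirely hypothetical, not drawn from SOLO or the report) that maps a catalogue record to schema.org JSON-LD, the structured-data vocabulary Google's crawler reads when it is embedded in a web page:

```python
import json

# Hypothetical catalogue record; the field names are illustrative only.
record = {
    "title": "The Discovery of the Future",
    "author": "H. G. Wells",
    "year": "1902",
    "shelfmark": "XYZ 123",
}

def record_to_jsonld(record):
    """Map a catalogue record to a schema.org Book description,
    using only well-known schema.org properties."""
    return {
        "@context": "https://schema.org",
        "@type": "Book",
        "name": record["title"],
        "author": {"@type": "Person", "name": record["author"]},
        "datePublished": record["year"],
        "identifier": record["shelfmark"],
    }

# Embedded in the record's HTML page, this block is what a crawler indexes.
snippet = ('<script type="application/ld+json">'
           + json.dumps(record_to_jsonld(record))
           + '</script>')
print(snippet)
```

The point of the sketch is the limitation the paragraph above describes: the mapping can only emit fields the catalogue record already holds, so thin or inconsistent source metadata produces equally thin structured data.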

In conclusion, this is a useful and thorough survey, but not one, I feel, likely to transform the academic user journey in any radical way.