
Note: I found Duncan’s book so fascinating that I subsequently wrote a much longer post about the book and the issues it raises.
That question is not exactly what Dennis Duncan has answered in his new book (Index, A History of the: A Bookish Adventure, Penguin, 2021). His book is a fascinating overview of how indexing developed through history. Nonetheless, it contains an implied answer to the question of manual versus automated indexes – and yet again, it’s humans 1, machines 0.
Duncan is a lecturer in English, and as you might expect from a humanities scholar, his conclusion is pretty clear. He provides two indexes for his book: one created by machine, and one done by hand. Of course, the hand-created index wins. A review in The Guardian praises the manual index and critiques the machine index:
“The first was generated by computer software and, despite being heavily pruned by the author, its usefulness is limited … it misses out more complex phrases and compounds … some entries, such as “alas” and “age” are too broad to be useful… The “actual” index though, isn’t just more useful; it is a literary accomplishment in its own right.” [Keith Kahn-Harris, Guardian, 18 September 2021]
There are indeed some indexes that are literary accomplishments, but there are many more that are not. There is no mention of what indexing tool was used to create the index. Microsoft Word, for example, only automates the sorting of the index terms – what you index is up to you. In other words, it is a manual indexing tool that still requires you to think about where topics should be grouped. In Word, for example, I can select the word “document” and mark it as an index entry.
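It is worth being concrete about what a purely machine-generated index actually is: little more than a concordance, mapping each word to the pages on which it appears, minus some stopwords. Here is a minimal sketch of that idea; the function name, the stopword list, and the sample pages are my own illustrative assumptions, not anything from Duncan’s book or from Word itself.

```python
from collections import defaultdict
import re

# A crude stopword list -- a real tool would use a much longer one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "not", "itself"}

def build_word_index(pages):
    """Build a crude back-of-the-book word index.

    `pages` is a list of strings, one per printed page (numbered from 1).
    Returns a dict mapping each lowercased word to a sorted list of
    the pages on which it occurs.
    """
    index = defaultdict(set)
    for page_no, text in enumerate(pages, start=1):
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                index[word].add(page_no)
    return {word: sorted(nums) for word, nums in sorted(index.items())}

pages = [
    "The index responded to shifts in the reading ecosystem.",
    "An index is a map of a book, not the book itself.",
]
print(build_word_index(pages)["index"])  # -> [1, 2]
```

This is exactly the kind of index the Guardian reviewer found wanting: it cannot see phrases or concepts, only words, which is why entries like “alas” slip through.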

My hunch is that academics pay little attention to indexes. The Guardian reviewer admits that he has “never paid much attention to the indexes in the scholarly books I have published”. Does that not reveal one of the problems of the index? The author is more concerned with the argument than with access. The only people an academic expects to be reading his or her book will be formal reviewers who read (they hope) from start to finish. But most books are not read like that. If we kept count of the books we consult, particularly for academic purposes, the number we read from start to finish would be a tiny proportion of all the books we open. This means that, for the majority of the books we need, we require some kind of signposting to find the relevant part we seek.
Dennis Duncan himself argues powerfully for the inclusion of an index – and yet he delegated the creation of the index for his own book to someone else! Surely if an index is so important, it should be done by the author personally? The semi-manual method described above does not take long, and enables the author to configure the index however they wish after the initial creation. There are no doubt many automatic indexing software tools, which can then be configured in a similar way to produce an acceptable result.
Perhaps the true story of the index is the fear that “the easy accessibility of the written word is the enemy of deep thought and contemplation”. You mustn’t make it too easy for people to find things in books! Remember that for much of French academic writing, an index is the exception rather than the norm.
To his credit, Duncan points out that the way we read has changed – not for better or worse, but in response to our changing situations. “The index responded to … shifts in the reading ecosystem – the rise of the novel, of the coffee-house periodical, of the scientific journal.” This doesn’t mean that we can no longer read properly, simply that we read differently today.
He differentiates between the concordance, which he calls a word index, and the subject index. His thesis is that there is a case for the back-of-the-book subject index. Has he made his case? For him the concordance, the word index, is neutral, while the subject index is interpretive (and can be wildly biased). His gesture towards the 21st century is to state that both are valid, but the subject index is still necessary.
Duncan’s argument is predictable: a Whig interpretation of indexes, perhaps. Today, of course, we know what an index is for, and we feel comfortable because other ages had such silly ideas: the Romans did not use alphabetical order. The Middle Ages thought that alphabetical order was the antithesis of reason. We, of course, know better!
But if today is the enlightened age, why are so many hand-compiled indexes so poor? Why are we so reluctant to make use of the machine? Instead of the two indexes provided in Duncan’s book, why could there not be one index, created by machine and then curated by a human? Let’s face it, humans miss things that machines do not.
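The hybrid workflow suggested here – machine first, human second – is easy to sketch. Assume the machine has produced a raw word index; a human editor then supplies curation rules: entries to drop and variants to merge under a single heading. The data, the rule names (`DROP`, `MERGE`), and the page numbers below are all hypothetical, chosen only to illustrate the idea.

```python
# A machine produces a raw word index; a human then curates it by
# dropping noise entries and merging variant terms under one heading.
raw_index = {
    "alas": [3, 41],           # too broad to be useful -- drop it
    "index": [1, 2, 9],
    "indexes": [2, 14],
    "indexing": [9, 14],
}

DROP = {"alas"}                               # entries the editor discards
MERGE = {"index": "indexing", "indexes": "indexing"}  # variant -> heading

curated = {}
for term, pages in raw_index.items():
    if term in DROP:
        continue
    heading = MERGE.get(term, term)
    curated.setdefault(heading, set()).update(pages)

curated = {h: sorted(p) for h, p in sorted(curated.items())}
print(curated)  # -> {'indexing': [1, 2, 9, 14]}
```

The machine guarantees nothing is missed; the human guarantees the result reads like a subject index rather than a concordance.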
The book includes sections on alphabetical order, but this is simply a digression. Alphabetical order is a bit like the qwerty ordering of a keyboard. We can use any arrangement of letters, as long as we all stick to the same rules. We sometimes drive on the left, sometimes on the right, and while it would be better to have a single rule, we can live with a few exceptions. And if the Greeks did invent alphabetical order, then why did it take so many centuries for alphabetical order to become standard?
I think it is missing the point somewhat to say simply that the manually created subject index is still relevant. If the author of a book on indexing can delegate the compilation of the index, it can’t be that important. Instead, there should be an engagement with the automated tools available, with more attention given to how we find things today – some kind of mixture of the word index and concept index, perhaps. However interesting the history of indexing might be, the book concludes simply by praising human indexing and critiquing machine-based indexes. We should be able to make better use of the indexing tools available – and not just delegate to a human indexer and hope for the best.