
Publishers attend their annual Rave

Once again Rave Technologies assembled an impressive cast of speakers for its annual publishing event (London, October 2017). Although the event is run by a vendor, Rave resists any attempt to turn it into a corporate showcase.

This year the theme was broadly innovation, specifically digital innovation. You could ask whether there is any innovation these days that is not digital, but more on that below.

What can machines discover from scholarly content?

Just when you thought everything was known about the academic user journey, along comes a workshop (the WSDM Workshop on Scholarly Web Mining, SWM 2017, held in Cambridge on 10 February 2017) that presents a whole new set of tools and investigations to consider.

It was a rather frantic event, squeezing no fewer than eleven presentations into a half-day session, albeit in the sumptuous and rather grand surroundings of the Council Chamber in the Cambridge Guildhall. Trying to summarise all eleven presentations would be a challenge; were there any common areas of inquiry?

How TrendMD uses collaborative filtering to show relatedness

TrendMD is (as its website states) “a content recommendation engine for scholarly publishers, which powers personalized recommendations for thousands of sites”. An interesting blog post by Matt Cockerill of TrendMD (published February 2016) claims “TrendMD’s collaborative filtering engine improves clickthrough rates 272% compared to a standard ‘similar article’ algorithm in an A/B trial”. That sounds pretty impressive.
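
TrendMD does not publish the details of its engine, but the general idea behind item-to-item collaborative filtering is easy to sketch: rather than comparing article text or metadata, you recommend what readers of an article also went on to read. The short Python sketch below illustrates this with a hypothetical click log; the article identifiers, the sessions data and the recommend helper are invented for illustration and are not TrendMD's API.

```python
# Minimal sketch of item-item collaborative filtering on reader co-click data.
# The article IDs and click log are hypothetical; TrendMD's actual engine is
# not public, so this only illustrates the general technique.
from collections import defaultdict
from itertools import combinations
import math

# Each session lists the articles one reader clicked (made-up data).
sessions = [
    ["doi:10.1000/a", "doi:10.1000/b", "doi:10.1000/c"],
    ["doi:10.1000/a", "doi:10.1000/c"],
    ["doi:10.1000/b", "doi:10.1000/d"],
    ["doi:10.1000/a", "doi:10.1000/c", "doi:10.1000/d"],
]

# Count how often each article appears, and how often pairs co-occur.
clicks = defaultdict(int)
co_clicks = defaultdict(int)
for session in sessions:
    for article in set(session):
        clicks[article] += 1
    for a, b in combinations(sorted(set(session)), 2):
        co_clicks[(a, b)] += 1

def similarity(a, b):
    """Cosine-style similarity between two articles based on co-clicks."""
    pair = tuple(sorted((a, b)))
    return co_clicks.get(pair, 0) / math.sqrt(clicks[a] * clicks[b])

def recommend(article, k=3):
    """Rank other articles by how often readers of `article` also read them."""
    candidates = [(other, similarity(article, other))
                  for other in clicks if other != article]
    return sorted(candidates, key=lambda x: x[1], reverse=True)[:k]

print(recommend("doi:10.1000/a"))
```

The contrast with a standard "similar article" algorithm is that the latter scores candidates on textual or metadata similarity, whereas collaborative filtering uses only reader behaviour, which is presumably where the reported clickthrough gain comes from.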

The Journal Impact Factor and the Publishing Business

The Journal Impact Factor has been discussed, and criticized, for years. A recent Scholarly Kitchen article looks at another proposal for improving the Impact Factor (“Optical Illusions”, 21 July 2016). This is by no means the first suggested improvement to the metric; a search on Scholarly Kitchen itself reveals several posts on this topic each year.

Perhaps the biggest problem with the Journal Impact Factor is this: most journals, from Nature to the smallest title, show a similar distribution when citations are counted for individual articles. A few articles are cited heavily, followed by a very long tail of articles that receive few or even zero citations. We all know this, yet we persist in believing that a Journal Impact Factor is somehow representative of each article in that journal.
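
A small worked example makes the mismatch concrete. The citation counts below are invented purely for illustration, but they have the long-tailed shape just described; the calculation mirrors the Impact Factor's definition as a simple mean (citations received in one year to items published in the previous two years, divided by the number of citable items).

```python
# Illustrative only: made-up citation counts for the 100 articles a journal
# published over two years, with the long-tailed shape described above.
from statistics import mean, median

citations = [310, 120, 45] + [4] * 10 + [1] * 30 + [0] * 57  # 100 articles

# The Impact Factor is essentially this mean: citations received in one year
# to items published in the previous two years, divided by the item count.
impact_factor = mean(citations)

print(f"articles: {len(citations)}")
print(f"mean citations (Impact Factor style): {impact_factor:.2f}")  # 5.45
print(f"median citations: {median(citations)}")                      # 0
print(f"share of articles below the mean: "
      f"{sum(c < impact_factor for c in citations) / len(citations):.0%}")
```

With these made-up numbers the journal would report an Impact Factor of about 5.5, even though 97 of its 100 articles are cited less often than that and more than half are not cited at all. That gap between the journal-level average and the typical article is exactly the problem described above.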

Did anyone read my article? Did it have any impact?

[Image: Elsevier Library Connect Research Impact Metrics Cards]

Any author will ask questions such as the ones above, and academic authors are no exception. In one sense, we have better answers than were possible just 20 years ago. Although thousands of copies of a print book might be sold in a year, in those days little evidence came back to the publisher that those books were actually read. In fact, one joke among publishers was that encyclopedias and bibles had one thing in common: they were more bought than read. A typical publisher would receive just a handful of comments from readers each year. As publishers, we knew the books were sold, but we didn’t know if they had ever been read. So if an author had asked us whether anyone had read their book, we couldn’t say.
