Reading Time: 3 minutes

Nye Bevan, one of the founders of the UK’s National Health Service, and hence a very appropriate public figure for a healthcare conference, although here he looks as if he were the inventor of the shopping centre.

This week I was at the HealTAC 2019 conference in Cardiff. This event, only in its second year, is a good indicator of current interests in health informatics. So how has the sector changed in the last 12 months?

Most importantly, there seems to have been little progress in making clinical data available. A citizens’ jury held in Brighton broadly favoured the use of free text from patient records for research purposes, but a citizens’ jury does not represent the views of the population at large. The paradox of the UK is that, unlike the USA, there is in principle a single nationwide healthcare strategy; yet that does not make patient records any more available for analytics than they are in the USA.

There seems to be increased use of machine learning, though not always wholesale adoption. Often, ML is compared with rule-based tools, and the comparisons sometimes come out against it. It’s impossible to generalise, of course, but one very clear reason for ML not delivering good results is the use of very small datasets. It was clear from the keynote presentation by Hongfang Liu that machine learning needs a very large dataset (and she presented graphical evidence to prove it).

For example, a presentation by Beatrice Alex (University of Edinburgh) concluded that a rule-based approach was better than using machine learning. However, the study looked only at extracting a small, precise set of terms from the data, in this case types of stroke. A question after the presentation asked how transferable the rules might be, which I think is a very real limitation of the rule-based approach. I’m no expert, but the danger with a study of this kind is that people might read it as a general judgement that rule-based systems are inherently superior to machine-learning systems.

Some of the presentations reported remarkably low F1 scores. For example, Daphne Chophard described a project to automatically expand abbreviations, with an F1 score of only 0.66 and a precision of just 0.50, against a baseline F1 score of 0.48. It seemed to be sufficient that method A represented an improvement on method B because its F1 score was higher – but it didn’t look as though either method would produce very good results overall.
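As a reminder of why a precision of 0.50 is so limiting (this is just the standard arithmetic, not something from the talk): F1 is the harmonic mean of precision and recall, so with precision at 0.50 the F1 score cannot exceed 0.67 no matter how high the recall. The reported figures imply a recall of roughly 0.97. A quick sketch:

```python
# F1 is the harmonic mean of precision (P) and recall (R).
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# With precision fixed at 0.50, even perfect recall caps F1 at 2/3:
ceiling = f1(0.50, 1.0)  # ~0.667

# To reach the reported F1 of 0.66 at precision 0.50,
# recall must be around 0.97:
print(round(f1(0.50, 0.97), 2))  # 0.66
```

In other words, the system in question found nearly every abbreviation but was wrong half the time – a trade-off the single F1 number hides.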

One of the more interesting developments was using some aspect of AI as part of a larger project. For example, Amal Alharbi (University of Sheffield) described using “query adaption” – not a term I am familiar with, but it seems to resemble the “learning to rank” tools described by some information retrieval companies to improve the ranking of search-engine results.
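For readers unfamiliar with learning to rank: in its pairwise form, a model is trained to decide, for any two candidate results, which should rank higher, and the learned scoring function then orders the full result list. A toy sketch of the idea (my own illustration, not anything presented at the conference) using a simple perceptron over feature differences:

```python
# Toy pairwise learning-to-rank: learn a weight vector w such that
# score(a) > score(b) whenever document a should outrank document b.
def train_pairwise(pairs, n_features, epochs=100, lr=0.1):
    # pairs: list of (better_doc, worse_doc) feature-vector tuples
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            diff = [b - c for b, c in zip(better, worse)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            if margin <= 0:  # misordered pair: nudge w towards the difference
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def score(w, doc):
    return sum(wi * xi for wi, xi in zip(w, doc))

# Two hypothetical features per document, e.g. (term overlap, recency).
pairs = [((0.9, 0.2), (0.3, 0.8)),   # on-topic doc should beat recent-but-off-topic one
         ((0.7, 0.1), (0.2, 0.9))]
w = train_pairwise(pairs, 2)
docs = [(0.2, 0.9), (0.8, 0.1)]
ranked = sorted(docs, key=lambda d: score(w, d), reverse=True)
```

Production systems use far richer models, but the principle is the same: learn an ordering from judged pairs rather than hand-tuning a ranking formula.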

Concerns about patient data seem to have pushed some projects in questionable directions, e.g. generating “synthetic” records. If the need to anonymise data renders that data unusable for analytics purposes, there is no point in the exercise.

The final keynote, by Hongfang Liu of the Mayo Clinic, communicated very clearly the need for combined solutions in this area. She used the term “digital health sciences” for the combination of data science, informatics and AI needed to deliver effective solutions. That trilogy of terms did not include clinical knowledge, but her gist was clear: healthcare solutions require a collaborative approach. Even in this talk, though, you felt the need for greater clarity in the recommendations. Comparing human and machine intelligence, she showed a table of strengths and weaknesses of each approach: humans, for example, are fallible, while machines have no common sense. But she listed machines as creating “legal and ethical concerns” with no corresponding entry for human intelligence. Machines may make human bias and privacy problems visible, but these are symptoms rather than causes of such problems. Bias did not begin with the first machine algorithm.