
At the EAHIL Conference in Basel today, a team from Cochrane gave a workshop entitled “PICO Search: Unlocking the Cochrane data vault”. The PICO principles are a four-fold way of interpreting a clinical trial:

  • Patient, population or problem (what are the characteristics of the patient or problem?)
  • Intervention (what is the intervention examined?)
  • Comparison (what is the alternative offered, e.g. placebo?)
  • Outcome (what are the outcomes?)
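
The four components above can be thought of as a simple record structure. As a minimal sketch (not part of the Cochrane tool), here is one plausible way to model a PICO breakdown, using the workshop's first title as an illustration; the component assignments are my own reading, not the presenters':

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PICOQuestion:
    """A clinical question decomposed into its four PICO components."""
    population: List[str] = field(default_factory=list)    # P: patient, population or problem
    intervention: List[str] = field(default_factory=list)  # I: intervention examined
    comparison: List[str] = field(default_factory=list)    # C: alternative offered, e.g. placebo
    outcome: List[str] = field(default_factory=list)       # O: outcomes measured

# One plausible reading of the first workshop title,
# "Fast-track cardiac care for adult cardiac surgical procedures":
question = PICOQuestion(
    population=["adults undergoing cardiac surgical procedures"],
    intervention=["fast-track cardiac care"],
)
```

Note that the comparison and outcome lists stay empty here: the title alone does not state them, which is itself part of what makes the exercise hard.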

The workshop session required the attendees to use a prototype PICO Finder tool to identify the PICO characteristics of four titles, the first of which was “Fast-track cardiac care for adult cardiac surgical procedures”. The interface enabled users to select terms from this title. If a term or phrase corresponded with a PICO term, it could be dragged into the relevant PICO box. But the terms available didn’t always correspond with the words you had been given.
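
The behaviour we saw is consistent with a lookup against a pre-built controlled vocabulary: a selected phrase maps to a PICO box only if the vocabulary already contains it. This hypothetical sketch (the vocabulary entries are invented, not Cochrane's) shows why "fast-track" could fail even when other phrases from the same title succeed:

```python
from typing import Optional

# Invented stand-in for a pre-built controlled vocabulary mapping
# accepted phrases to their PICO box.
CONTROLLED_VOCABULARY = {
    "adult": "population",
    "cardiac surgical procedures": "population",
    "placebo": "comparison",
}

def match_term(phrase: str) -> Optional[str]:
    """Return the PICO box for a phrase, or None if the vocabulary lacks it."""
    return CONTROLLED_VOCABULARY.get(phrase.lower())

match_term("cardiac surgical procedures")  # -> "population"
match_term("fast-track")                   # -> None: the term is simply absent
```

If the vocabulary builder never added "fast-track", no amount of dragging can place it, which matches what the attendees experienced.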

The result was pretty poor. None of the participants I could see completed the exercise. Nobody could find anything in the list of acceptable terms for “fast-track”, the very first term. Comments included “we are confused” and “it looks a bit difficult”. One user pointed out that a term may be an intervention in some contexts but not in others.

As an explanation of why the task had been so difficult, the presenter said that this interface was intended only for expert searchers, and showed us a completely different interface, which looked potentially much better. We were told that the task of building the vocabulary on which this first interface depended was not yet complete. Clearly the PICO Finder tool requires an expert to build, in advance, a full vocabulary of all the preferred terms from systematic reviews, which strikes me as a very labour-intensive exercise.

Despite all the negative comments, a speaker then asked the audience whether they would use the tool with their students – an unlikely proposition. There was clearly no way this interface would see further use.

The PICO approach looks to me to be very sensible, and of course interfaces need to be tested. Here, though, a good idea became the victim of a poor, unthought-through interface, and a willing and interested audience was wasted. The interface was unsuccessful, no “data vault” was unlocked, yet there was no apology from the presenters. Humans resent being given a limited set of choices that cannot accommodate the information they have been given. At the end, as if to comfort us, we were told this interface would not be in the final product. You could ask why 40 users were asked to test it in a public session.