
The STM Startup Fair (London, 6 December 2022), billed by STM as “our first ever”, was a fascinating event. There were 23 startups, each given five minutes to present what they do, in an innovative format that dispensed with the usual seating: presentations took place at one side of the hall, while the rest of the space was occupied by desks, one per startup. With this format, you could attend the presentations, chat to the startups, or simply chat to each other and wander about. This informal feel certainly worked for me, as it provided a relaxed environment that facilitated conversation.
Nonetheless, there was a lot of wasted time. Every one of the 23 startups was presented twice, and as if that wasn’t enough, five of the organisations, which were chosen as finalists for the Vesalius Innovation Award, got another chance to pitch. These finalists were interviewed, but the questions were hardly probing.
How to evaluate all these startups? That is difficult. Dividing them by functionality was a good start. One argument is that if several tools provide the same functionality, then clearly there is a market need. Hence, there were four companies providing image manipulation checking, five companies offering peer reviewer finder tools, three companies (hum, PSI Metrics, and SciScore) providing analytics, and so on.
Another perspective might be what I would call the “MBA factor” – how slickly the business model was presented. On this score, hum (an analytics platform) would come top. Their presentation was by John Challice, whose title, Senior VP of Business Development, gave an indication of the kind of pitch. Many of the other startups, you feel, didn’t have a VP of anything; one or two of them wouldn’t know what a VP was. Of course, the quality of the presentation may not be related to the quality of the startup, but for some of the more basic presentations, I would expect a great deal more work to be needed to get the tool integrated into any existing workflow.
Or you could assess the startups by the actual figures they revealed – not marketing figures about the total addressable market, but figures stating usage. Some of the startups were a substantial size, others smaller: one, Radioteraquiz, boasted of having just 292 subscribers. That didn’t make what they do any less valid, but clearly they have a long haul ahead before they have any kind of influence.
A fourth parameter, perhaps equally valid, is a moral one: is this tool doing useful things? On this reckoning, Audemic, which provides audio versions of research articles, is a clear winner. So too is SciScore, which makes research more reproducible.
Another parameter is funding and payment model. Many of these startups were free, but free as in “free until we can charge you”, and it wasn’t always clear what the long-term business model was. Some of them used the familiar phrase “total addressable market” – ScientifyRESEARCH, for example, which provides a database of funding opportunities – so it is clear their model is to go after that pot of gold.
Or you could simply assess the startups by their name. On this reckoning, ImageTwin scores well – it does what it says on the tin. By contrast, the startup assessment consultancy Park56 was named after the founder’s address. Makes sense to him, but not quite so memorable to the rest of us.
At the end of the day, five of the startups were on the shortlist for the Karger Vesalius Innovation Award, worth $15K to the winner. In addition, there was a popularity award, voted on by the attendees. For what it is worth, the winner of the Vesalius Innovation Award, sponsored by Karger and Molecular Connections, was ImageTwin, an image manipulation checker; the most popular presentation award went to Prophy, a referee finder. But to be honest, I did not gain more information from the presentations than I already had from reading the short account of each startup on the event website.
Was this sufficient to evaluate the startups? Sadly not. Of the four image checking companies, one was awarded the Innovation prize, and I’m sure it deserved the award, but there was nothing in the presentations or the information provided that enabled us in the audience to make any judgment on the quality of its tool compared to the other three companies in that space. Of course, a thorough appraisal would take a lot longer than one day, but it would have been more useful.
There was wide variation that could be gleaned, not from what we were shown, but from what we already knew. GetFTR, for example, is hardly a startup, having been launched in 2019. It is more an industry-wide initiative than a startup (none the worse for that, but to be assessed in a different way from all the other startups here).
Park56, a consultancy that assesses other startups, raised a valid point in talking about workflows. Many of these tools were point solutions, solving a problem such as checking for image manipulation, but at the same time creating a problem, in that they needed the publisher to integrate the tool into their workflow.
Conclusions
So, was this event useful for assessing startups? Yes, but for me it worked because I was able to talk directly to the founders, rather than listening to their presentations. I would probably organise this event differently to make it more useful:
- Only include startups that have been trading for at least 12 months and that have customers – in other words, startups that have already passed the first credibility test;
- Give the startups more of a challenge: the totally arbitrary restriction of having to present against a background of slides that change every 15 seconds teaches us nothing. A more challenging interview format might be better; getting customers to comment on the startups would be another angle.
- Communicate to the participants at least some of the judging panel’s criteria and findings when explaining the results. We had to take the verdict entirely on trust.
Which were my tips? I would be interested in following three startups because they combine an idea with some maturity: SciFlow, SciScore, and hum. I would certainly use a consultancy like Park56 to check the credibility of startups before engaging them. Most importantly, I would check the total cost of using the tool – not just the purchase price, but the cost of integrating it into the existing workflow. But perhaps the biggest takeaway from the day was sharing the feeling of excitement at seeing something new taking shape, at seeing innovation happen. You don’t get that at more established companies.