Christos Petrou has written a detailed and well-documented account in The Scholarly Kitchen of the rise and fall of megajournals, specifically PLOS. But the conclusions he draws seem to be at variance with the principles behind PLOS.

The megajournal, a single journal that publishes content across many subject areas using a criterion of acceptable rather than exceptional quality, has been a publishing phenomenon. PLOS ONE was one of the first and certainly the most successful. In the past few years the number of papers it publishes has decreased substantially, by around 14% in the last year measured by Mr Petrou. Of course, the reasons for this decline can only be guessed at, but perhaps the most obvious one, not mentioned by Mr Petrou, is that other megajournals, notably Scientific Reports, have entered the market and taken some submissions away from PLOS ONE. Another megajournal, IEEE Access, covers a different subject area, as he points out, and is unlikely to have taken submissions from PLOS ONE.

Petrou’s explanation for the “decline” of PLOS ONE is rather different. Most importantly, he uses the impact factor as a measure of a journal’s success or otherwise, while at the same time conceding that the impact factor is “not healthy for research”; it provides “a window for questionable behavior by publishers and researchers”. So why is the journal impact factor (JIF) used as a criterion of success? The metric has been criticised for many years, as far back as a 1997 article that listed 21 problems with the JIF. One questionable consequence of the JIF is the well-known practice of academics using it to try to boost their own reputations, since the JIF is awarded to a journal as a whole, not to the individual papers within it. If your paper is published in a high-ranking journal, then at a stroke you appear to be a more successful researcher.

The nature of the JIF tends to cause swings in submissions to megajournals as their official impact factors are published. As Petrou himself describes:

If they can achieve a high JIF, megajournals are likely to attract and publish scientifically sound but less citable content by authors that seek to benefit from connection to the high JIF. This will lead to JIF decline, which makes the journal less attractive to authors chasing the JIF, and hence fewer articles and a drop in revenue.
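The mechanics behind that swing are worth spelling out. The two-year JIF is simply the number of citations a journal receives in a given year to the items it published in the previous two years, divided by the number of citable items it published in those two years. To take an invented illustration (the figures here are mine, not Mr Petrou’s): a megajournal that published 1,000 citable papers in 2018 and 2019, and received 3,000 citations to them during 2020, would earn a 2020 JIF of 3.0. If that figure attracts a flood of submissions and the journal publishes 2,000 papers over the next two-year window, those papers must attract 6,000 citations merely to hold the JIF steady; if, as Petrou argues, the newly attracted papers are less citable, the JIF falls and the cycle goes into reverse.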

That seems a convincing explanation for the fall in the number of articles published in PLOS ONE. After describing PLOS ONE’s falling impact factor, Mr Petrou goes on to state that “Despite the drop in volume [of papers published], PLOS ONE remains a tremendously innovative and commercially successful journal”. He doesn’t describe any of these innovations, and all his statistics suggest the journal might be heading for failure, in his terms. It would appear that for him a journal is a commercial vehicle, nothing more, nothing less. To me, PLOS ONE seems to be taking seriously the call to reduce the time a manuscript takes to pass through the system: in a blog post from May 2019, Joerg Heber, Editor-in-Chief, gives clear statistics for the turnaround time of PLOS submissions and shows how the time to publication is decreasing.

So what’s wrong with PLOS? When it was founded, it was lauded as an innovation that would shake up academic publishing. For example, the New York Times wrote in 2003:

it can be very hard, if not impossible, to find the results of properly vetted, taxpayer-financed science — and in some cases it can be hard for your doctor to find them, too. The Public Library of Science could help change all that, creating open access to research. The publishers of scientific journals are naturally skeptical, but the real test will come in the marketplace of ideas.

Wikipedia reports that PLOS was founded “to challenge academia’s obsession with journal status and impact factors”, yet seventeen years later Mr Petrou is lambasting PLOS on precisely those grounds: its declining impact factor.

PLOS was originally funded by grants, but became self-sustaining around 2012 (and few grant-funded initiatives of this kind ever achieve sustainability). But that wasn’t enough to silence an undercurrent of criticism from some quarters. For example, the New Yorker, that august publication, wrote sneeringly of it as recently as 2017, in an article entitled “‘Paging Dr. Fraud’: The Fake Publishers That Are Ruining Science”:

The rise of the Internet introduced the open-access model—journals such as Public Library of Science, or PLOS, which are free and widely available. Some of these publications, including PLOS, charge the author a fee for the privilege of having her work peer-reviewed and published.

Today we take the author-pays principle for granted as an acceptable basis for open-access publishing, since at least it makes the published research available. Peer review and managing the editorial process both cost money, and subscription journals, which charge readers to access content and have for many years increased their charges to libraries well above the rate of inflation, are hardly more virtuous.

Mr Petrou’s views of PLOS are those of a would-be investor, not a researcher. Thus he suggests that PLOS should spend more on marketing, with a vague mention of TrendMD. On open-access journals generally, he writes:

their long-term performance has been occasionally unreliable, introducing uncertainty in an industry that has been particularly attractive to investors for its ability to generate low but sustainable growth.

And his recommendation is to “avoid overselling their [megajournals’] success to investors”. In fact, the main thrust of his article seems to be aimed at investors. But why is PLOS being assessed from an investment point of view when it is a not-for-profit publisher and remains cash-positive? And when the citation ranking system has been shown, not least in The Scholarly Kitchen itself, to generate wild swings in impact factor as a result of the process itself, why is PLOS singled out for condemnation? Perhaps appeal to external investors is not the main (or the only) criterion to use when judging academic research, or the success or otherwise of a scholarly journal. Perhaps we should instead review the citation model that produces such crazy (and unjustified) swings in author submissions.