Monday, 20 April 2015

How long does a scientific paper need to be?





[Cartoon: ©CartoonStock.com]

There was an interesting exchange last week on PubMed Commons between Maurice Smith, senior author of a paper on motor learning, and Björn Brembs, a neurobiologist at the University of Regensburg. The main thrust of Brembs' critique was that the paper, which was presented as surprising, novel and original, failed to cite the prior literature adequately. I was impressed that Smith engaged seriously with the criticism, writing a reasoned defence of the choice of material in the literature review and noting that the claims of over-hyping rested on selective citation. What really caught my attention was the following statement in his rebuttal: "We can reassure the reader that it was very painful to cut down the discussion, introduction, and citations to conform to Nature Neuroscience’s strict and rather arbitrary limits. We would personally be in favor of expanding these limits, or doing away with them entirely, but this is not our choice to make."
As it happens, this comment really struck home with me, as I had been internally grumbling about this very issue after a weekend of serious reading of background papers for a grant proposal I am preparing. I repeatedly found evidence that length limits were having a detrimental effect on scientific reporting. I think there are three problems here.
1. The first is exemplified by the debate around the motor learning paper. I don't know this area well enough to evaluate whether omissions in the literature review were serious, but I am all too familiar with papers in my own area where a brief introduction skates over the surface of past work. One feels that length limits play a big part in this, but there is also another dimension: to some editors and reviewers, a paper that starts by documenting how the research builds on prior work is at risk of being seen as merely 'incremental' rather than 'groundbreaking'. I was once explicitly told by an editor that too high a proportion of my references were more than five years old. This obsession with novelty is in danger of encouraging scientists to devalue serious scholarship as they zoom off in search of the latest hot topic.
2. In many journals, key details of methods are relegated to a supplement or, worse still, omitted altogether. I know that many people rejoiced when the Journal of Neuroscience declared it would no longer publish supplementary material: I thought it was a terrible decision. In most of the papers I read, the methodological detail is key to evaluating the science, and if we only get the cover story of the research, we can be seriously misled. Yes, it can be tedious to wade through supplementary material, but if it is not available, how do we know the work is sound?
3. The final issue concerns readability. One justification for strict length limits is that it is supposed to benefit readers if authors write succinctly, without rambling on for pages and pages. And we know that the longer the paper, the fewer people will even begin to read it, let alone get to the end. So, in principle, length limits should help. But in practice they often achieve the opposite effect, especially in papers that report several experiments and use complex methods. For instance, I recently read a paper that reported, all within the space of a single Results section about 2000 words long, (a) a genetic association analysis; (b) replications of the association analysis in five independent samples; (c) a study of methylation patterns; (d) a gene expression study in mice; and (e) a gene expression study in human brains. The authors had done their best to squeeze in all essential detail, though some was relegated to supplemental material, but the net result was that I came away feeling as if I had been hit around the head with a baseball bat. My sense was that the appropriate format for reporting such a study would have been a monograph, where each component of the study could be given a chapter; but of course that would not have the kudos of a publication in a high impact journal, and arguably fewer people would read it.
Now that journals are becoming online-only, a major reason for imposing length limits – cost of physical production and distribution of a paper journal – is far less relevant. Yes, we should encourage authors to be succinct, but not so succinct that scientific communication is compromised.


Saturday, 21 March 2015

Will Elsevier say sorry?



Elsevier, the publisher of Research in Autism Spectrum Disorders (RASD) and Research in Developmental Disabilities (RIDD), is no stranger to controversy. It became the focus of a campaign in 2012 because of its pricing strategies and restrictive practices. Elsevier responded that the prices it charged were fair because of the added value it brought to the publishing process. Among other things, they claimed: "We pay editors who build a distinguished brand that is set apart from 27,000 other journals. We identify peer reviewers."
Well, the claim of added value is exploded by the recent revelations of goings-on at RASD and RIDD, as documented in my recent blogposts here and here. One of the points to emerge from detailed scrutiny of the publication data is that not only was the editor publishing numerous papers in his own journal; he was also frequently bypassing the peer review process altogether. Evidence on this point is indirect, because the published journals do not document the peer review process itself. However, it is possible to look at the lag from receipt to acceptance, which can be extracted for each individual paper. I have looked at this further by merging publication-lag data with information from Web of Science to create a dataset containing all papers published in RIDD between 2004 and 2014, together with each paper's dates of receipt and acceptance.
I was able to extract this information for 2060 papers. There were 23 papers for which the date information was not available: 15 of these were authored by the editor. The median publication lag for the remainder is shown by year in Figure 1.  This shows a fascinating progression.
Figure 1. Median acceptance lag in days, and % of papers undergoing revision, by year

Between 2004 and 2006, it seems that RIDD was a perfectly normal journal. The lag from receipt to acceptance was 3-4 months, and nearly all papers underwent some revision during that period. Fewer than 4% of papers were accepted within 3 weeks. In 2007, the sister journal RASD was launched, and the editor started to accept papers with shorter lags and without revision. During 2008-2011, the median lag between receipt and acceptance fell to two weeks or less, and only a minority of papers underwent revision. Subsequently, this pattern of editorial behaviour was reversed to some extent, and by 2014 we were back to a median lag of 80 days, with 78% of papers undergoing revision.
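For anyone who wants to check this kind of summary against the data, here is a minimal sketch of the calculation in Python. The file name and column names (received, accepted, revised) are hypothetical stand-ins for the merged dataset linked at the end of this post:

```python
# Minimal sketch: per-year median acceptance lag and % of papers revised.
# Assumes one row per paper, with receipt/acceptance dates and a flag
# indicating whether the paper was revised before acceptance.
# File and column names are hypothetical.
import pandas as pd

papers = pd.read_csv("ridd_papers.csv", parse_dates=["received", "accepted"])
papers["lag_days"] = (papers["accepted"] - papers["received"]).dt.days

summary = papers.groupby(papers["accepted"].dt.year).agg(
    n_papers=("lag_days", "size"),
    median_lag=("lag_days", "median"),
    pct_revised=("revised", lambda s: 100 * s.mean()),
)
print(summary.round(1))
```

Grouping on the year of acceptance is an assumption on my part; grouping on year of receipt, or of publication, would shuffle a few papers that straddle a year boundary but should not change the overall pattern.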
Nevertheless, all was not well. In my previous post, I noted that a group of people with editorial associations with these journals, Sigafoos, O'Reilly and Lancioni, had published numerous papers in RASD and RIDD. I had analysed the acceptance lags for RASD and shown that they were substantially shorter than for other authors. Figure 2 shows part of a larger figure, which can be found here; it demonstrates that the remarkably short acceptance times for papers with Sigafoos as author (usually accompanied by Lancioni and O'Reilly) were also seen in RIDD. The full dataset on which these figures are based is available here*.

Figure 2: Lag from receipt to acceptance for RIDD papers 2010-2014. Black dots show papers authored by Sigafoos
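The author comparison itself takes only a few lines once the lag is computed. Again, the authors column and the string match are assumptions about how the data might be laid out, not the actual structure of the linked files:

```python
# Sketch: compare acceptance lags for papers naming a given author
# against the rest of the journal. Column names are hypothetical,
# as in the sketch above.
import pandas as pd

papers = pd.read_csv("ridd_papers.csv", parse_dates=["received", "accepted"])
papers["lag_days"] = (papers["accepted"] - papers["received"]).dt.days

flagged = papers["authors"].str.contains("Sigafoos", na=False)
print("Median lag, flagged papers:", papers.loc[flagged, "lag_days"].median())
print("Median lag, other papers:  ", papers.loc[~flagged, "lag_days"].median())
```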
It is difficult to believe that nobody at Elsevier was aware of what was going on. In 2011, at the height of the rapid turnaround times, submissions were running at five times their 2004 level. Many journals have grown in size over this period, but this was massive. Furthermore, the publisher was recording the dates of receipt and acceptance with each paper: did nobody actually look at what they were publishing and think that something was odd? This was not a brief hiccup: it went on for years. Either the publisher was slumbering on the job, or they were complicit with the editor.
Most academics find publishing a paper a stressful and tedious business. How nice, then, you might think, to have a journal that speeds the process along and avoids the need to grapple with reviewer comments. I have heard from several people who tell me they published a paper in RIDD; when it was accepted without review, they were surprised, but hardly in a mood to complain about it.  So does it matter?
Well, yes. It matters because RIDD and RASD are presented to the world as peer-reviewed journals, backed up by the 'distinguished brand' of Elsevier. We live in times when there is competition for jobs and prizes, and these will go to those who have plenty of publications in peer-reviewed journals, preferably with high citations. If an editor bypasses peer review and encourages self-citation, then the quality of the work in the journal is misrepresented and some people gain unfair advantages from this. The main victims here are those who published in RASD and RIDD in good faith, thinking that acceptance in the journal was a marker of quality. They will be feeling pretty bitter about the 'added value' of Elsevier right now, as the value of their own work will be degraded by association with these journals.
It is not surprising that Elsevier wants to focus on the future rather than on the past. They are a signatory to the Committee on Publication Ethics (COPE), and the discovery that two of their journals have flouted numerous COPE guidelines is an embarrassment that could have commercial implications. But I'm afraid that, while I understand their position, I don't think this is good enough. Those who published good work in journals where the publisher failed in its duty of oversight deserve an acknowledgement that there were problems, and an apology.


* Update 23rd March 2015: Data now added for Research in Autism Spectrum Disorders in a separate Excel file.