Saturday, 21 March 2015

Will Elsevier say sorry?



Elsevier, the publisher of Research in Autism Spectrum Disorders (RASD) and Research in Developmental Disabilities (RIDD), is no stranger to controversy. It became the focus of a campaign in 2012 because of its pricing strategies and restrictive practices. Elsevier responded that the prices it charged were fair because of the added value it brought to the publishing process. Among other things, they claimed: "We pay editors who build a distinguished brand that is set apart from 27,000 other journals. We identify peer reviewers."
Well, the claim of added value is exploded by the recent revelations of goings-on at RASD and RIDD, as documented in my recent blogposts here and here. One point to emerge from a detailed scrutiny of publications data is that the editor not only published numerous papers in his own journal: he also frequently bypassed the peer review process altogether. Evidence on this point is indirect, because the published journals do not document the peer review process itself. However, it is possible to look at the lag from receipt of a paper to its acceptance, which can be extracted for each individual paper. I have looked at this further by merging publication-lag data with information available from Web of Science, to create a dataset containing all papers published in RIDD between 2004 and 2014, together with the dates of receipt and acceptance of papers.
I was able to extract this information for 2060 papers. There were 23 papers for which the date information was not available: 15 of these were authored by the editor. The median publication lag for the remainder is shown by year in Figure 1.  This shows a fascinating progression.
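The computation behind Figure 1 is simple enough to sketch in a few lines. The example below is a minimal illustration with made-up dates, not the actual RIDD dataset: it shows how a receipt-to-acceptance lag can be derived per paper and summarised as a median by year.

```python
from datetime import date
from statistics import median

def median_lag_by_year(papers):
    """Return the median receipt-to-acceptance lag (in days) per year."""
    lags = {}
    for received, accepted, year in papers:
        # Lag is simply the difference between the two dates, in days
        lags.setdefault(year, []).append((accepted - received).days)
    return {year: median(days) for year, days in sorted(lags.items())}

# Hypothetical records: (date received, date accepted, publication year)
papers = [
    (date(2005, 1, 10), date(2005, 4, 20), 2005),  # a ~3-month lag
    (date(2005, 3, 1),  date(2005, 6, 10), 2005),
    (date(2010, 2, 1),  date(2010, 2, 10), 2010),  # accepted within days
    (date(2010, 5, 5),  date(2010, 5, 19), 2010),
]
print(median_lag_by_year(papers))  # {2005: 100.5, 2010: 11.5}
```

The same per-paper lags also give the percentage of papers accepted within any chosen window (e.g. 3 weeks), which is how the revision figures discussed below can be checked.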
Figure 1. Acceptance lags in days, and % papers with revision

Between 2004 and 2006, it seems that RIDD was a perfectly normal journal. The lag from receipt to acceptance was 3-4 months, and nearly all papers underwent some revision during that period. Fewer than 4% of papers were accepted within 3 weeks. In 2007, the sister journal RASD was launched, and the editor started to accept papers with shorter lags and without revision. During 2008-2011, the median lag between receipt and acceptance fell to two weeks or less, and only a minority of papers underwent revision. Subsequently, this pattern of editorial behaviour was reversed to some extent, and by 2014 we were back to a median lag of 80 days, with 78% of papers undergoing revision.
Nevertheless, all was not well. In my previous post, I noted that a group of people with editorial associations with these journals, Sigafoos, O'Reilly and Lancioni, had published numerous papers in RASD and RIDD. I had analysed the acceptance lags for RASD and shown they were substantially shorter than for other authors.  Figure 2 shows part of a larger figure which can be found here; it demonstrates that remarkably short acceptance times for papers with Sigafoos as author (usually accompanied by Lancioni and O'Reilly) were also seen in RIDD. The full dataset on which these figures are based is available here*.

Figure 2: Lag from receipt to acceptance for RIDD papers 2010-2014. Black dots show papers authored by Sigafoos
It is difficult to believe that nobody at Elsevier was aware of what was going on. In 2011, at the height of the rapid turnaround times, submissions were running at five times the 2004 level. Many journals have grown over this period, but this growth was massive. Furthermore, the publisher was recording the dates of receipt and acceptance with each paper: did nobody actually look at what they were publishing and think that something was odd? This was not a brief hiccup: it went on for years. Either the publisher was slumbering on the job, or they were complicit with the editor.
Most academics find publishing a paper a stressful and tedious business. How nice, then, you might think, to have a journal that speeds the process along and avoids the need to grapple with reviewer comments. I have heard from several people who tell me they published a paper in RIDD; when it was accepted without review, they were surprised, but hardly in a mood to complain about it.  So does it matter?
Well, yes. It matters because RIDD and RASD are presented to the world as peer-reviewed journals, backed up by the 'distinguished brand' of Elsevier. We live in times when there is competition for jobs and prizes, and these will go to those who have plenty of publications in peer-reviewed journals, preferably with high citations. If an editor bypasses peer review and encourages self-citation, then the quality of the work in the journal is misrepresented and some people gain unfair advantages from this. The main victims here are those who published in RASD and RIDD in good faith, thinking that acceptance in the journal was a marker of quality. They will be feeling pretty bitter about the 'added value' of Elsevier right now, as the value of their own work will be degraded by association with these journals.
It is not surprising that Elsevier wants to focus on the future rather than on the past. They are a signatory to the Committee on Publication Ethics, and the discovery that two of their journals have flouted many of the COPE guidelines is an embarrassment that could have commercial implications. But I'm afraid that, while I understand their position, I don't think this is good enough. Those who published good work in journals where the publisher failed in its duty of oversight deserve an acknowledgement that there were problems, and an apology.


* Update 23rd March 2015: Data now added for Research in Autism Spectrum Disorders in a separate Excel file.






Sunday, 15 March 2015

Bishopblog catalogue (updated 15th March 2015)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Accentuate the negative (26 Oct 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) Novelty, interest and replicability (19 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Men! what you can do to improve the lot of women (25 Feb 2014) Should Rennard be reinstated? (1 June 2014)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014)

Tuesday, 24 February 2015

Editors behaving badly?


The H-index is a metric that was devised to identify talented individuals whose published work had made a significant impact on the field (Hirsch, 2005). One of its apparent virtues was that it was relatively difficult to game. However, analysis of publications in a group of journals in the field of developmental disabilities suggests there has been a systematic and audacious attempt at gaming the H-index by a cabal of editors.

What's the evidence for this claim? Let's start by briefly explaining what the H-index is. It's computed by rank-ordering a set of publications by citation count, and identifying the point where the rank exceeds the number of citations. So if a person has an H-index of 20, this means that they've published 20 papers with at least 20 citations – but their 21st paper (if there is one) has fewer than 21 citations.
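For the programmatically minded, the definition above can be written as a short function. This is a minimal sketch, not tied to any particular citation database:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still 'supports' an H-index equal to its rank
        else:
            break
    return h

# Five papers with 25, 8, 5, 3 and 3 citations: three papers have at least
# 3 citations each, but the fourth has fewer than 4, so H = 3
print(h_index([25, 8, 5, 3, 3]))  # 3
```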

The reason this is reckoned to be relatively impervious to gaming is that authors don't, in general, have much control over whether their papers get published in the first place, or over how many citations their published papers get: that's down to other people. You can, of course, cite yourself, but most reviewers and editors would spot an author citing themselves inappropriately, and would tell them to remove otiose references. Nevertheless, self-citation is an issue. Another issue is superfluous authorship: if I were to have an agreement with another author that we'd always put each other down as authors on our papers, then both our H-indices would benefit from any citations that our papers attracted. In principle, both these tricks could be dealt with: e.g., by omitting self-citations from the H-index computation, and by dividing the number of citations by the number of authors before computing the H-index. In practice, though, this is not usually done, and the H-index is widely used when making hiring and promotion decisions.

In my previous blogpost, I described unusual editorial practices at two journals – Research in Developmental Disabilities and Research in Autism Spectrum Disorders – that had led to the editor, Johnny Matson, achieving an H-index on Web of Science of 59. (Since I wrote that blogpost it's risen to 60.) That impressive H-index was based in part, however, on Matson publishing numerous papers in his own journals, and engaging in self-citation at a rate that was at least five times higher than is typical for productive researchers in his field.

It seems, though, that this is just the tip of a large and ugly iceberg. When looking at Matson's publications, I found two other journals where he published an unusual number of papers: Developmental Neurorehabilitation (DN) and Journal of Developmental and Physical Disabilities (JDPD). JDPD does not publish dates of submission and acceptance for its papers, but DN does, and I found that for the 32 papers co-authored by Matson in this journal between 2010 and 2014 for which the information was available, the median lag between a paper being received and its acceptance was one day. So it seemed a good idea to look at the editors of DN and JDPD. What I found was a very cosy relationship between editors of all four journals.

Figure 1 shows associate editors and editorial board members who have published a lot in some or all of the four journals. It is clear that, just as Matson published frequently in DN and JDPD, so too did the editors of DN and JDPD publish frequently in RASD and RIDD. Looking at some of the publications, it was also evident that these other editors frequently co-authored papers with one another. For instance, over a four-year period (2010-2014) there were 140 papers co-authored by Mark O'Reilly, Jeff Sigafoos, and Giulio Lancioni. Interestingly, Matson did not co-author with this group, but he frequently accepted their papers in his journals.

Figure 1: N papers authored by each individual 2010-2014 for 4 journals.
Orange denotes main editor, yellow associate editor, and tan a member of editorial board. Sigafoos moved from editor to editorial board of DN in this period.

Figure 2 shows the distribution of publication lags for the 140 papers in RASD and RIDD where the authors included the O'Reilly, Sigafoos and Lancioni trio. This shows the number of days between the paper being received by the journal and its acceptance. For anything less than a fortnight, it is implausible that there could have been peer review.

Figure 2
Lag from paper received to acceptance (days) for 73 papers co-authored by Sigafoos, Lancioni and O'Reilly, 2010-2014

Papers by this trio of authors were not only accepted with breathtaking speed: they were also often on topics that seem rather remote from 'developmental disabilities', such as post-coma behaviour, amyotrophic lateral sclerosis, and Alzheimer's disease. Many were review papers and others were in effect case reports based on two or three patients. The content was so slender that it was often hard to see why the input of three experts from different continents was required. Although none of these three authors achieved Matson's astounding rate of 54% self-citations, they all self-cited at well above normal rates: Lancioni at 32%, O'Reilly at 31% and Sigafoos at 26%.  It's hard to see what explanation there could be for this pattern of behaviour other than a deliberate attempt to boost the H-index. All three authors have a lifetime H-index of 24 or over.

One has to ask whether the publishers of these journals were asleep on the job, not to notice the remarkable turnaround of papers from the same small group of people. In the Comments on my previous post, Michael Osuch, a representative of Elsevier, reassured me that "Under Dr Matson’s editorship of both RIDD and RASD all accepted papers were reviewed, and papers on which Dr Matson was an author were handled by one of the former Associate Editors." I queried this because I was aware of cases of papers being accepted without peer review and asked if the publisher had checked the files: something that should be easy in these days of electronic journal management. I was told "Yes, we have looked at the files. In a minority of cases, Dr Matson acted as sole referee." My only response to this is, see Figure 2.

Reference  
Hirsch, J. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. DOI: 10.1073/pnas.0507655102

P. S. I have now posted the raw data on which these analyses are based here.

P.P.S. 1st March 2015


Some of those commenting on this blogpost have argued that I am behaving unfairly in singling out specific authors for criticism. Their argument is that many people were getting papers published in RASD and RIDD with very short time lags, so why should I pick on Sigafoos, O'Reilly, and Lancioni?

I should note first of all that the argument 'everyone's doing it' is not a very good one. It would seem that this field has some pretty weird standards if it is regarded as normal to have papers published without peer review in journals that are thought to be peer-reviewed.
Be this as it may, since some people don't seem to have understood the blogpost, let me state more explicitly the reasons why I have singled out these three individuals. Their situation is different from others who have achieved easy reviewer-free publication in that:
1. Their publications don't only get into RIDD and RASD very quickly; their sheer quantity is also staggering – they eclipse all authors other than Matson. It's easy to get the stats from Scopus, so I am showing the relevant graphs here, for RIDD/RASD together and also for the other two journals I focused on, Developmental Neurorehabilitation and JDPD.

Top 10 authors RASD/RIDD 2010-2014: from Scopus


Top 10 authors 2010-2014: Developmental Neurorehabilitation (from Scopus)

Top 10 authors 2010-2014: JDPD (from Scopus)

2. All three have played editorial roles for some of these four journals. Sigafoos was previously editor at Developmental Neurorehabilitation, and until recently was listed as associate editor at RASD and JDPD.  O'Reilly is editor of JDPD, is on the editorial board of Developmental Neurorehabilitation and was until 2015 on the editorial board of RIDD.  Lancioni was until 2015 an associate editor of RIDD.
Now if it is the case that Matson was accepting papers for RASD/RIDD without peer review (and even my critics seem to accept that this was happening), as well as publishing a high volume of his own papers in those journals, then that is certainly not normally accepted behaviour by an editor. The reason journals have editorial boards is precisely to ensure that the journal is run properly. If these editors were aware that you could get loads of papers into RASD/RIDD without peer review, then their reaction should have been to query the practice, not to take advantage of it. Allowing it to continue has put the reputation of these journals at risk. You might ask why I didn't include other associate editors or board members. Well, for a start, none of them was quite so prolific in using RASD/RIDD as a publication outlet, and, if my own experience is anything to go by, it seems possible that some of them were unaware that they were even listed as playing an editorial role.
Far from raising questions with Matson about the lack of peer review in his journals, O'Reilly and Sigafoos appear to have encouraged him to publish in the journals they edited. Information about publication lag is not available for JDPD; in Developmental Neurorehabilitation, Matson's papers were being accepted with such lightning speed as to preclude peer review.
Being an editor is a high status role that provides many opportunities but also carries responsibilities. My case is that these were not taken seriously and this has caused this whole field of study to suffer a major loss of credibility.
I note that there are plans to take complaints about my behaviour to the Vice Chancellor at the University of Oxford. I'm sure he'll be very interested to hear from complainants and astonished to learn about what passes for acceptable publication practices in this field.
  

P.P.P.S 7th March 2015
I note from the comments that there are those who think that I should not criticise the trio of Sigafoos, O'Reilly and Lancioni for having numerous papers published in RASD and RIDD with remarkably short acceptance times, because others had papers accepted in these journals with equally short lags between submission and acceptance. I've been accused of cherry-picking data to try to make a case that these three were gaming the system.
As noted above, I think that to repeatedly submit work to a journal knowing that it will be published without peer review, while giving the impression that it is peer-reviewed (and hence eligible for inclusion in metrics such as the H-index), is unacceptable in absolute terms, regardless of who else is doing it. It is particularly problematic in someone who has been given editorial responsibility. However, it is undoubtedly true that rapid acceptance of papers was not uncommon under Matson's editorship. I know this both from people who've emailed me personally about it and from brave people who mention it in the Comments. However, most of these people were not gaming the system: they were surprised to find such easy acceptance, but didn't go on to submit numerous papers to RASD and RIDD once they became aware of it.
So should I do an analysis to show that, even by the lax editorial standards of RASD/RIDD, Sigafoos/O'Reilly/Lancioni (SOL) papers had preferential treatment? Personally I don't think it is necessary, but to satisfy complainants, I have done such an analysis. Here's the logic. If SOL are given preferential treatment, then we should find that the acceptance lag for their papers is less than for papers by other authors published around the same time. Accordingly, I searched on Web of Science for papers published in RASD during the period 2010-2014. For each paper authored by Sigafoos, I took as a 'control' paper the next paper in the Web of Science list that was not authored by any of the six individuals listed in Table 1 above, and checked its acceptance lag. There were 20 papers by Sigafoos: 18 of these were also co-authored by O'Reilly and Lancioni, so I had already got their acceptance lag data. I added the data for the two additional Sigafoos papers. For one paper, the acceptance lag was not provided, leaving 19 matched pairs of papers. For papers authored by Sigafoos and colleagues, the median lag was 4 days. For papers with other authors, the median lag was 65 days. The difference is highly significant on a matched-pairs t-test: t = 3.19, p = .005. The data on which this analysis was based can be found here.

I dare say someone will now say I have cherry-picked data because I only analysed RASD and not RIDD papers. To that I would reply: the evidence for preferential treatment is so strong that if you want to argue it did not occur, it is up to you to do the analysis. Be warned, checking acceptance lags is very tedious work.