Sunday, 30 August 2015

Opportunity cost: A new red flag for evaluating interventions for neurodevelopmental disorders

Back in 2012, I wrote a blogpost offering advice to parents who were trying to navigate their way through the jungle of alternative interventions for children with dyslexia. I suggested a set of questions that should be asked of any new intervention, and identified a set of 'red flags', i.e., things that should make people think twice before embracing a new treatment.

The need for an update came to mind as I reflected on the Arrowsmith program, an educational approach that has been around in Canada since the 1980s, but has recently taken Australia and New Zealand by storm. Despite credulous press coverage in the UK, Arrowsmith has not, as far as I know, taken off here. Australia, however, is a different story, with Arrowsmith being taken up by the Catholic Education Office in Sydney after they found 'dramatic results' in a pilot evaluation.

For those who remember the Dore programme, this seems like an action replay. Dore was big in both the UK and Australia in the period around 2007-2008. Like Arrowsmith, it used the language of neuroscience, claiming that its approach treated the underlying brain problem, rather than the symptoms of conditions such as dyslexia and ADHD. Parents were clamouring for it, it was widely promoted in the media, and many people signed up for long-term payment plans to cover a course of treatment. People like me, who worked in the area of neurodevelopmental disorders, were unimpressed by the small amount of published data on the program, and found the theoretical account of brain changes unconvincing (see this critique). However, we were largely ignored until a Four Corners documentary was made by the Australian ABC, featuring critics as well as advocates of Dore. Soon after, the company collapsed, leaving both Dore's employees and the many families who had signed up to long-term financial deals high and dry. It was a thoroughly dismal episode in the history of intervention for children with neurodevelopmental problems.

With Arrowsmith, we seem to be at the start of a similar cycle in Australia. Parents, hearing about the wondrous results of the program, are lobbying for it to be made more widely available. There are even stories of parents moving to Canada so that their child can reap the benefits of Arrowsmith. Yet Arrowsmith ticks many of the 'red flag' boxes that I blogged about, lacks any scientific evidence for efficacy, and has attracted criticism from mainstream experts in children's learning difficulties. As with Dore, the Arrowsmith people seem to have learned that if you add some sciency-sounding neuroscience terms to justify what you do, people will be impressed. It is easy to give the impression that you are doing something much more remarkable than just training skills through repetition.

They also miss the point that, as Rabbitt (2015, p. 235) noted regarding brain-training in general: "Many researchers have been frustrated to find that ability on any particular skill is surprisingly specific and often does not generalise even to other quite similar situations." There's little point in training children to type numbers into a computer rapidly if all that happens is that they get better at typing numbers into a computer. For this to be a viable educational strategy, you'd need to show that this skill had knock-on effects on other learning. That hasn't been done, and all the evidence from mainstream psychology suggests it would be unusual to see such transfer of training effects.

Having failed to get a reply to a request for more information from the Catholic Education Office in Sydney, I decided to look at the evidence for the program that was cited by Arrowsmith's proponents. An ongoing study by Dr Lara Boyd of the University of British Columbia features prominently on their website, but, alas, Dr Boyd was unresponsive to an email request for more information. It would seem that in the thirty-five years Arrowsmith has been around, there have been no properly conducted trials of its effectiveness, but there are a few reports of uncontrolled studies looking at children's cognitive scores and attainments before and after the intervention. One of the most comprehensive reviews is in the Ph.D. thesis of Debra Kemp-Koo from the University of Saskatchewan in 2013. In her introduction, Dr Kemp-Koo included an account of a study of children attending the private Arrowsmith school in Toronto:
All of the students in the study completed at least one year in the Arrowsmith program with most of them completing two years and some of them completing three years. At the end of the study many students had completed their Arrowsmith studies and left for other educational pursuits. The other students had not completed their Arrowsmith studies and continued at the Arrowsmith School. Most of the students who participated in the study were taking 6 forty minute modules of Arrowsmith programming a day with 1 forty minute period a day each of English and math at the Arrowsmith School. Some of the students took only Arrowsmith programming or took four modules of Arrowsmith programming with the other half of their day spent at the Arrowsmith school or another school in academic instruction (p. 34-35; my emphasis).
Two of my original red flags concerned financial costs, but I now realise it is important to consider opportunity costs: i.e., if you enlist your child in this intervention, what opportunities are they going to miss out on as a consequence? For many of the interventions I've looked at, the time investment is not negligible, but Arrowsmith seems to be in a league of its own. The cost of spending one to three years working on unevidenced, repetitive exercises is to miss out on substantial parts of a regular academic curriculum. As Kemp-Koo (2013) remarked:
The Arrowsmith program itself does not focus on academic instruction, although some of these students did receive some academic instruction apart from their Arrowsmith programming. The length of time away from academic instruction could increase the amount of time needed to catch up with the academic instruction these students have missed. (p. 35; my emphasis).

References
Kemp-Koo, D. (2013). A case study of the Learning Disabilities Association of Saskatchewan (LDAS) Arrowsmith Program. Ph.D. thesis, University of Saskatchewan, Saskatoon.

Rabbitt, P. M. A. (2015). The aging mind. London and New York: Routledge.

Saturday, 11 July 2015

Publishing replication failures: some lessons from history

I recently travelled to Lismore, Ireland, to speak at the annual Robert Boyle summer school. I had been intrigued by the invitation, as it was clear this was not the usual kind of scientific meeting. The theme of Robert Boyle, who was born in Lismore Castle, was approached from very different angles, and those attending included historians of science, scientists and journalists, as well as interested members of the public. We were treated to reconstructions of some of Boyle's livelier experiments, heard wonderful Irish music, and celebrated the installation of a plaque at Lismore Castle to honour Katherine Jones, Boyle's remarkable sister, who was also a scientist.

My talk was on the future of scientific scholarly publication, a topic that the Royal Society had explored in a series of meetings to celebrate the 350th Anniversary of the publication of Philosophical Transactions. I'm particularly interested in the extent to which current publishing culture discourages good science, and I concluded by proposing the kind of model that I recently blogged about, where the traditional science journal is no longer relevant to communicating science.

What I hadn't anticipated was the relevance of some of Boyle's writing to such contemporary themes.

Boyle, of course, didn't have to grapple with issues such as the Journal Impact Factor or Open Access payments. But some of the topics he covered are remarkably contemporary. He would have been interested in the views of Jason Mitchell, John L. Loeb Associate Professor of the Social Sciences at Harvard, who created a stir last year by writing a piece entitled "On the emptiness of failed replications". I see that the essay has now been removed from the Harvard website, but the main points can be found here*. It was initially thought to be a parody, but it seems to have been a sincere attempt at defending the thesis that "unsuccessful experiments have no meaningful scientific value." Furthermore, according to Mitchell, "Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues." I have taken issue with this standpoint in an earlier blogpost; my view is that we should not assume that a failure to replicate a result is due to fraud or malpractice, but rather should encourage replication attempts as a means of establishing which results are reproducible.

I am most grateful to Eoin Gill of Calmast for pointing me to Robert Boyle's writings on this topic, and for sending me transcripts of the most relevant bits. Boyle has two essays on "the Unsuccessfulness of Experiments" in a collection of papers entitled “Certain Physiological Essays and other Tracts”. In these he discusses (at inordinate length!) the problems that arise when an experimental result fails to replicate. He starts by noting that such unsuccessful experiments are not uncommon:
… in the serious and effectual prosecution of Experimental Philosophy, I must add one discouragement more, which will perhaps as much surprize you as dishearten you; and it is, That besides that you will find …… many of the Experiments publish'd by Authors, or related to you by the persons you converse with, false or unsuccessful, … you will meet with several Observations and Experiments, which though communicated for true by Candid Authors or undistrusted Eye-witnesses, or perhaps recommended to you by your own experience, may upon further tryal disappoint your expectation, either not at all succeeding constantly, or at least varying much from what you expected. (opening passage)
He is interested in exploring the reasons for such failure; his first explanation seems equivalent to one that those using statistical analyses are all too familiar with – a chance false positive result.
And that if you should have the luck to make an Experiment once, without being able to perform the same thing again, you might be apt to look upon such disappointments as the effects of an unfriendliness in Nature or Fortune to your particular attempts, as proceed but from a secret contingency incident to some experiments, by whomsoever they be tryed. (p. 44)
And he urges the reader not to be discouraged – replication failures happen to everyone!
…. though some of your Experiments should not always prove constant, you have divers Partners in that infelicity, who have not been discouraged by it. (p. 44)
He identifies various possible systematic reasons for such failure: problems with the skill of the experimenter, the purity of the ingredients, or variation in the specific context in which the experiment is conducted. He even, implicitly, addresses statistical power, noting how one needs many observations to distinguish what is general from individual variation.
…the great variety in the number, magnitude, position, figure, &c. of the parts taken notice of by Anatomical Writers in their dissections of that one Subject the humane body, about which many errors would have been delivered by Anatomists, if the frequency of dissections had not enabled them to discern betwixt those things that are generally and uniformly found in dissected bodies, and those which are but rarely, and (if I may so speak) through some wantonness or other deviation of Nature, to be met with. (p. 94)
Because of such uncertainties, Boyle emphasises the need for replication, and the dangers of building complex theory on the basis of a single experiment:
….try those Experiments very carefully, and more than once, upon which you mean to build considerable Superstructures either theorical or practical, and to think it unsafe to rely too much upon single Experiments, especially when you have to deal in Minerals: for many to their ruine have found, that what they at first look'd upon as a happy Mineral Experiment has prov'd in the issue the most unfortunate they ever made. (p. 106)
I'm sure there are some modern scientists who must be thinking their lives would have been much easier if they had heeded this advice. But perhaps most relevant to the modern world, where there is such concern about the consequences of failure to replicate, are Boyle's comments on the reputational impact of publishing irreproducible results:
…if an Author that is wont to deliver things upon his own knowledge, and shews himself careful not to be deceived, and unwilling to deceive his Readers, shall deliver any thing as having try'd or seen it, which yet agrees not with our tryals of it; I think it but a piece of Equity, becoming both a Christian and a Philosopher, to think (unless we have some manifest reason to the contrary) that he set down his Experiment or Observation as he made it, though for some latent reason it does not constantly hold; and that therefore though his Experiment be not to be rely'd upon, yet his sincerity is not to be rejected. Nay, if the Author be such an one as has intentionally and really deserved well of Mankind, for my part I can be so grateful to him, as not only to forbear to distrust his Veracity, as if he had not done or seen what he says he did or saw, but to forbear to reject his Experiments, till I have tryed whether or no by some change of Circumstances they may not be brought to succeed. (p. 107)
The importance of fostering a 'no blame' culture was one theme that emerged in a recent meeting on Reproducibility and Reliability of Biomedical Research at the Academy of Medical Sciences. It seems that in this, as in so many other aspects of science, Boyle's views are well-suited to the 21st century.

For more on Robert Boyle, see here

12th July 2015: Thanks to Daniël Lakens, who pointed me to the Wayback Machine, where earlier versions of the article can be found: http://web.archive.org/web/*/http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm

Friday, 3 July 2015

Bishopblog catalogue (updated 3rd July 2015)

Source: http://www.weblogcartoons.com/2008/11/23/ideas/

Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014) Our early assessments of schoolchildren are misleading and damaging (4 May 2015)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Accentuate the negative (26 Oct 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) Novelty, interest and replicability (19 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014) Will Elsevier say sorry? (21 Mar 2015) How long does a scientific paper need to be? (20 Apr 2015) Will traditional science journals disappear? (17 May 2015) My collapse of confidence in Frontiers journals (7 Jun 2015)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014) Metricophobia among academics (28 Nov 2014) Why evaluating scientists by grant income is stupid (8 Dec 2014) Dividing up the pie in relation to REF2014 (18 Dec 2014) Journals without editors: What is going on? (1 Feb 2015) Editors behaving badly? (24 Feb 2015)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Should Rennard be reinstated? (1 June 2014) How the media spun the Tim Hunt story (24 Jun 2015)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014)

Wednesday, 24 June 2015

How the media spun the Tim Hunt story

 I had vowed not to blog about the Tim Hunt affair. I thought everything that could have been said had been said, and I'd made my own position clear in a comment on Athene Donald's blog, and in a comment in the Independent.
But then I came across Stephen Ballentyne's petition to "Bring Back Tim Hunt", and I was transported back five years to my first ever blog post on "Academic Mobbing in Cyberspace," a strange tale about sex, fruitbats and internet twittermobs. I started blogging in 2010 because I wanted to highlight how the internet encourages people to jump in to support causes without really examining the facts of the matter. The Ballentyne petition points to an uncannily similar conclusion.
Let me start out by saying I am not arguing against people's right to take Tim Hunt's side. As many people have noted, he is a well-liked man who has done amazing science and there are many women as well as men who will speak up for him as a supporter of female scientists. Many of those who support him do so in full knowledge of the facts, out of a sense of fairness and, in the case of those who know him personally, loyalty.
My concern is about the number of signatories of Ballentyne's petition who have got themselves worked up into a state of indignation on the basis of wrong information. There are three themes that run through the comments that many people have posted:
a) They think that Tim Hunt has been sacked from his job
b) They think he is 'lost to science'
c) They think University College London (UCL) fired him in response to a 'Twitter mob'.
None of these things is true. (a) Hunt is a retired scientist who was asked to resign from an honorary position.  That's shaming and unpleasant, but an order of magnitude different from being sacked and losing your source of income. (b) Hunt continues to have an affiliation to the Crick Institute – a flagship research centre that recently opened in Central London. (c) UCL are explicit that their acceptance of his resignation from an honorary position had nothing to do with the reaction on social media.
So why do people think these things? Quite simply, this is the interpretation that has been put about in many of the mainstream media. The BBC has been particularly culpable. The Today programme on Radio 4 ran a piece which started by saying Hunt had 'lost his job'. This was a couple of days after the UCL resignation, when any self-respecting journalist would have known this to be false. Many newspapers fuelled the flames. An interview with Boris Johnson on the BBC website added the fictitious detail that Hunt had been sacked by the Royal Society. He is in fact still a Fellow – he has simply been asked to step down from a Royal Society committee. It is interesting to ask why the media are so keen to promote the notion of Hunt as victim, cruelly dismissed by a politically correct university.
It's fascinating analysing the comments on the petition. After deleting duplicates, there were 630 comments. Of those commenters where gender could be judged, 71% were male. Rather surprisingly, only 52% of commenters were from the UK, and 12% from the US, with the remainder scattered all over the world.
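(For the record, the tallying itself is trivial; below is a hypothetical sketch of the kind of count involved. The field names and the two example entries are invented for illustration, and in reality the gender and country judgements were presumably made by hand from the petition page. The point the sketch makes explicit is that the gender percentage has a smaller denominator than the country percentages.)

```python
# Hypothetical reconstruction of the petition-comment tally.
# The two entries below are invented; imagine one dict per unique
# commenter, 630 in all after duplicates were deleted.
comments = [
    {"gender": "M", "country": "UK"},
    {"gender": None, "country": "US"},  # gender could not be judged
]

# The gender percentage is taken over only those comments where
# gender could be judged...
judged = [c for c in comments if c["gender"] is not None]
pct_male = 100 * sum(c["gender"] == "M" for c in judged) / len(judged)

# ...whereas the country percentages are taken over all comments.
pct_uk = 100 * sum(c["country"] == "UK" for c in comments) / len(comments)
pct_us = 100 * sum(c["country"] == "US" for c in comments) / len(comments)

print(f"{pct_male:.0f}% male (of judgeable); {pct_uk:.0f}% UK, {pct_us:.0f}% US")
```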
There were 93 comments that explicitly indicated they thought that Hunt had been sacked from his job, and/or was now 'lost to science' – and many more that called for his 'reinstatement', where it was unclear whether they were aware this was an honorary position.  They seemed to think that Hunt was dependent on UCL for his laboratory work, and that he had a teaching position. For instance, "Don't let the world lose a great scientist and teacher over a stupid joke." I would agree with them that if he had been sacked from a regular job, then UCL's action would have been disproportionate. However, he wasn't.
Various commentators drew comparisons with repressive fascist or Marxist states, e.g. "It is reminiscent of the cultural revolution in China where 'revisionist' professors were driven out of their offices by their prospective students, to do farm labour." And there was an awful lot of blaming of women, Twitter and feminism in general, with comments such as "Too much of this feminist ranting going on. Men need to get their spines back and bat it away" and "A respected and competent scientist has been hounded out of his job because of an ignorant baying twitter mob who don't happen to like his views". And my favourite: "What he said was a joke. If lesbian feminist women can't take a joke, then they are the joke." Hmm.
It's unfortunate that the spread of misinformation about Hunt's circumstances has muddied the waters in this discussion. A minority of those commenting on Ballentyne's petition are genuine Hunt supporters who are informed of the circumstances; the bulk seem to be people who are concerned because they have believed the misinformation about what happened to Hunt; a further set are opportunistic misogynists who do Hunt no favours by using his story as a vehicle to support their dislike of women. There is a much more informed debate in the comments section on Athene Donald's blog, which I would recommend to anyone who wants to understand both sides of the story.

Sunday, 7 June 2015

My collapse of confidence in Frontiers journals

Frontiers journals have become a conspicuous presence in academic publishing since they started in 2007 with the advent of Frontiers in Neuroscience. When they were first launched, I, like many people, was suspicious. This was an Open Access (OA) online journal where authors paid to publish, raising questions about the academic rigour of the process. However, it was clear that the publishers had a number of innovative ideas that were attractive to authors, with a nice online interface and a collaborative review process that made engagement with reviewers more of a discussion than a battle with anonymous critics. As in many other online OA journals, the editorial decision to publish was based purely on an objective appraisal of the soundness of the study, not on a subjective evaluation of importance, novelty or interest. As word got round that respectable scientists were acting as editors, reviewers and authors of papers in Frontiers, people started to view it as a good way of achieving fast and relatively painless publication, with all the benefits of having the work openly available and accessible to all.
The publishing model has been highly successful. In 2007, there were 45 papers published in Frontiers in Neuroscience, whereas in 2014 it was 3,012 (data from Scopus search for source title Frontiers in Neuroscience, which includes Frontiers journals in Human Neuroscience, Cellular Neuroscience, Molecular Neuroscience, Behavioral Neuroscience, Systems Neuroscience, Integrative Neuroscience, Synaptic Neuroscience, Aging Neuroscience, Evolutionary Neuroscience and Computational Neuroscience). If all papers attracted the author fee of US$1900 (£1243) for a regular article, this would bring in £3.7 million in 2014: the actual income would be less than this because some articles are cheaper, but it's clear that the income is in any case substantial, especially since the journal is online and there are no print costs. But this is just the tip of the iceberg. Frontiers has expanded massively since 2007 to include a wide range of disciplines. A Scopus search for articles with a journal title that includes "Frontiers in" found over 54,000 articles since 2006, with 10,555 published in 2014.
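For anyone who wants to check the arithmetic, here is the back-of-envelope calculation as a minimal sketch (the paper count comes from the Scopus search above; the flat-fee assumption is mine, and deliberately yields an upper bound, since some article types attract lower fees):

```python
# Upper-bound estimate of 2014 author-fee income for Frontiers in Neuroscience,
# assuming (unrealistically) that every article paid the full regular fee.
papers_2014 = 3012   # articles published in 2014 (Scopus search, see text)
fee_gbp = 1243       # regular article fee: US$1900, roughly £1243

upper_bound_gbp = papers_2014 * fee_gbp
print(f"Upper bound on 2014 income: £{upper_bound_gbp:,}")
# -> Upper bound on 2014 income: £3,743,916, i.e., roughly £3.7 million
```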
With success, however, have come growing rumbles of discontent. Questions are being raised about the quality of editing and reviewing in Frontiers. My first inkling of this came when a colleague told me he would not review for Frontiers because his name would be published with the article. This wasn't because he wanted confidentiality; rather he was concerned that it would appear he had given approval for the article, when in fact he had major reservations.
Then, there have been some very public criticisms of editorial practices at Frontiers. The first was associated with the retraction of a paper that claimed climate denialism was linked to a more general tendency to endorse conspiracy theories. Papers on this subject are always controversial and this one was no exception, attracting complaints to the editor. The overall impression from the account in Retraction Watch was that the editor caved in to legal threats, thereby letting climate change deniers muzzle academic freedom of speech. This led to the resignation of one Frontiers editor**.
Next, there was a case that posed the opposite problem: the scientific establishment were outraged that a paper on HIV denial had been published, and argued that it should be retracted. The journal editor decided that the paper should not be retracted, but instead rebranded it as Opinion – see Retraction Watch account here.
Most recently, in May 2015 there was a massive upset when editors of the journals Frontiers in Medicine and Frontiers in Cardiovascular Medicine mounted a protest at the way the publisher was bypassing their editorial oversight and allocating papers to associate editors who could accept them without the knowledge of the editor-in-chief. The editors published a manifesto of editorial independence, and 31 of them were subsequently sacked by the publisher.
All of these events have chipped away at my confidence in Frontiers journals, but it collapsed completely when someone on Twitter pointed me to this article entitled "First time description of dismantling phenomenon" by Laurence Barrer and Guy Giminez from Aix Marseille Université, France. I had not realised that Frontiers in Psychology had a subsection on Psychoanalysis and Neuropsychoanalysis, but indeed it does, and here was a paper proposing a psychoanalytic account of autism. The abstract states: "The authors of this paper want to demonstrate that dismantling is the main defense mechanism in autism, bringing about de-consensus of senses." Although the authors claim to be adopting a scientific method for testing a hypothesis, it is unclear what would constitute disproof. Their evidence consists of interpreting known autistic characteristics, such as fascination with light, in psychoanalytic terms. The source of dismantling is attributed to the death drive. This reads like the worst kind of pseudoscience, with fancy terminology and concepts being used to provide evidence for a point of view which is more like a religious belief than a testable idea. I wondered who was responsible for accepting this paper. The Editor was Valeria Vianello Dri, Head of Child and Adolescent Neuropsychiatry Units in Trento, Italy. No information on her biography is provided on the Frontiers website. She lists four publications: these are all on autism genetics. All are multi-authored and she is not first or last author on any of these*. A Google search confirmed she has an interest in psychoanalysis but I could find no further information to indicate that she had any real experience of publishing scientific papers. There were three reviewers: the first two had no publications listed on their Frontiers profiles; the third had a private profile, but a Google search on his name turned up a CV that did not include any peer-reviewed publications.
So it seems that Frontiers has opened the door to a branch of pseudoscience to set up its own little circle of editors, reviewers and authors, who can play at publishing peer-reviewed science. I'm not saying all people with an interest in psychoanalysis should be banished: if they do proper science, they can publish that in regular journals without needing this kind of specialist outlet. But this section of Frontiers is a disastrous development; there is no evidence of scientific rigour, yet the journal gives credibility to a pernicious movement that is particularly strong in France and Argentina, which regards psychoanalysis as the preferred treatment for autism. Many experts have pointed out that this approach is not evidence-based, but worse still, in some of its manifestations it amounts to maltreatment.  What next, one wonders? Frontiers in homeopathy?
Like the protesting editors of Frontiers in Medicine, I think the combined evidence is that Frontiers has allowed the profit motive to dominate. They should be warned, however, that once they lose a reputation for publishing decent science, they are doomed. I've already heard it said that someone on a grants review panel commented that a candidate's articles in Frontiers should be disregarded. Unless these journals can recover a reputation for solid science with proper editing and peer review, they will find themselves shunned.

*The Frontiers biography suggests she is last author on a paper in 2008, but the author list proved to be incomplete.
** Correction: Shortly after I posted this, Stephan Lewandowsky wrote to say that there were three editors who resigned over the RF retraction, plus another one voicing intense criticism.