Sunday 20 March 2011

The expansion of research regulators: an evolutionary perspective

Reading about evolution has made me think about why some professions grow and thrive while others die out. I'm intrigued by the expansion in numbers of people regulating the activities of researchers. How have we got to a position where the Academy of Medical Sciences concludes: “A complex and bureaucratic regulatory environment is stifling health research in the UK”?

Consider the situation in the 1970s. If you wanted to do a piece of research, you did it, no questions asked. But bad things can happen if you let people do just what they want. There are terrible examples of studies where research participants were infected, hurt or humiliated without realising what was happening or giving their consent. For examples, see Rebecca Skloot’s book, ‘The Immortal Life of Henrietta Lacks’, and Dominic Streatfeild’s ‘Brainwash: The Secret History of Mind Control’. The solution was to create a body of people, the regulators, who would scrutinise research and make sure it was ethical. Despite the regulation, every few years something bad still happened. The regulators responded by increasing their numbers and adding more regulations.

In general, I’ve avoided doing studies that require me to go through an NHS ethics committee, because the process is so long-winded and bureaucratic that it saps all my enthusiasm and takes up time I’d rather spend doing research. Our University ethics committee can approve studies that don’t involve patients and operates a much less complex system. Recently, though, I badly wanted to do a study involving NHS patients, and decided to grit my teeth and go through the process. It has taken literally weeks of form-filling, and what amazed me was the sheer number of regulators I dealt with in the course of applying for approval. There was one set of people from the research ethics committee (REC), another set from R&D, and yet more from the Comprehensive Clinical Research Network - in fact several sets of those, depending on whether you were concerned with local or regional matters. There are people whose job it is to book your application in to a REC via a centralised system, and others whose job it is to do the same thing at local level when the first group fails to find you a slot.

These people were typically very helpful, but that isn’t the point. Why are there so many people whose sole function in life is the ethical scrutiny of researchers? Why are there so many forms to fill in that a recent article raised concerns about the environmental impact of paper use by RECs? And how have we got to the situation described by the Academy of Medical Sciences whereby it takes an average of 621 days from receiving funding to recruiting the first patient for a trial of a cancer drug?

From an evolutionary perspective, a research regulator is a life form with three very interesting characteristics. First, its numbers explode in response to catastrophic events, regardless of how rare those events are. Second, it has few natural predators, so its expansion goes unchecked. Third, regulators multiply like bacteria: they spawn more regulations, which require more regulators, so there is a rapid increase in population over time. And these three characteristics derive, I submit, from a basic human tendency to focus on emotionally engaging events while ignoring their probability.

Catastrophe as a driving force in increasing the number of regulators
When something really terrible happens - someone is badly hurt or upset, or even worse, killed - we empathise with the victim and want to do something to prevent it happening again. All of our attention is taken up by the awfulness of the event, and we ignore the costs inherent in a solution. This kind of thinking is described by Dan Gardner in his book ‘Risk’ as due to System 1, or Gut, as opposed to the more rational System 2, or Head. Gut’s supremacy is such that if someone were to draw attention to the rarity of the catastrophe or the costs of the proposed solution, they would be criticised for being heartless. It is this way of reasoning that fosters the dramatic rise in regulators.

Consider the case of Dr Harold Shipman, a general practitioner in Greater Manchester, who in 2000 was found guilty of murdering 15 of his patients. According to Wikipedia, he was one of the most prolific known serial killers in history, with 215 murders positively ascribed to him, although the real number is likely to be higher. He had no obvious motive and did not appear mentally ill to his colleagues or patients. The case led to the Shipman Inquiry, chaired by Dame Janet Smith. It was discovered that Shipman had been sent a warning letter by the GMC but allowed to return to practice after a conviction for dishonestly obtaining pethidine in 1976. The inquiry judged, however, that even if a harsher punishment had been given, it would not have prevented Shipman from becoming a serial killer. Nevertheless, the inquiry called for a database to be established containing information about all doctors in the NHS, including disciplinary records, which both patients and NHS bodies could access. It also supported a system of revalidation, whereby doctors would undergo regular checks of their competence to practise. There is no indication that anyone ever discussed the probability of another Harold Shipman occurring. I’m sure there are many doctors who are incompetent or have massive personal problems, and no doubt a few who feel like murdering their patients from time to time. But I find it hard to believe that we need regulations to scrutinise apparently sane doctors to prevent them from murdering their patients in cold blood. Nevertheless, in the interests of ‘this must never happen again’, it has been recommended that a whole new posse of regulators be created to check family doctors, compliance with which will no doubt cost time that could be spent with patients. I suspect one day there will be another doctor who does something really terrible, but I doubt these regulations would prevent it.

A much less dramatic but pertinent example was described on Jenny Rohn’s blog. I recommend you read her account of the regulations produced by her research funder requiring staff in the laboratory to wear safety glasses at all times. Jenny, a woman after my own heart, took the trouble to get to the bottom of why this regulation had been introduced, and found there had been a small number of accidents which could have been prevented if the scientists involved had taken common-sense precautions and worn safety glasses while performing specific hazardous procedures. A reminder to staff to do this should have been sufficient. Instead, a regulation has been introduced which costs time and money.

Regulators have no natural predators
Once regulation is established, it is remarkably difficult to remove. This is largely a consequence of the same human tendency discussed above: the attentional focus on catastrophe. Anyone who argues against regulation will be seen as so cold-hearted or cavalier as not to care about the catastrophe that led to the regulation being set up.

A key point here is that individual regulations often appear trivial - especially when considered in relation to the catastrophes they are designed to avert. Filling in a form or going to the optician’s is tedious, but it seems curmudgeonly to complain if someone’s life or sight can be saved. However, there are expenses in both time and money, and these can become substantial if large numbers of people are required to adhere to regulations and to administer them. We do need to consider carefully whether the measures that are put in place are effective and proportionate.

Consider another example. On 4th August 2002, 10-year-olds Holly Wells and Jessica Chapman were murdered by their school caretaker, Ian Huntley, in the village of Soham, Cambridgeshire. Huntley had been the subject of a string of earlier allegations of sexual interest in young girls from his time in the North East of England, as well as a burglary charge, but only the burglary charge was placed on the police national computer, and even this was not picked up by the routine checks that the school did, because Huntley had changed his name. Since this case, there has been a massive tightening of police checks for people who work with children. If you plan to work with children or young people, you need a Criminal Records Bureau (CRB) check. My research team works in schools and we all have CRB checks. Recently, though, we’ve found that some head teachers want a new CRB check, just for their school, even if you have recently obtained one. And the regulations have been extended to individuals such as children’s authors who make occasional visits to schools. Everyone is clearly very nervous about letting unvetted adults come into contact with children. But does it work? On 1st October 2009, Plymouth nursery worker Vanessa George admitted 13 charges of sexual abuse of children and of making and distributing indecent images of children. She had completed a qualification in child care and passed a Criminal Records Bureau police check to allow her to work with younger children.

I am aware that if I query the usefulness of the CRB check procedures it will look as if I am placing my own personal inconvenience above the welfare of vulnerable children. Regulation is tedious, and sometimes costly, but what monster would refuse to fill in a form or pay a few pounds in order to prevent a child being murdered? I can assure readers that I feel every bit as much rage and grief as anyone else every time I see that photo of Holly and Jessica that is so often reproduced in the media. If something can be done to stop children getting murdered or molested, I would be the first to endorse it. I just query whether this massive bureaucratic exercise is a cost-effective solution, as compared, say, with using resources to teach children how to identify and respond to adults who behave inappropriately.

Ultimately, the only thing that could lead to a mass extinction of regulators would be if government were to decide that the regulation was too expensive. However, in general, governments are nervous of deregulation because it will upset people who see regulation as the path to preventing another catastrophe. I disagree. It’s my belief that we can never control life so that there are no catastrophes. So from time to time bad things will happen. Every time they do, more regulators are created, but none are ever removed. Their inexorable rise seems inevitable. But as if this were not enough, there is an additional process at work.

Regulators generate more regulators
In the field of ethical scrutiny of research, a major shake-up was spawned by one rare event: the discovery that a pathologist at Alder Hey Children’s Hospital had stored body organs of deceased children without their parents’ knowledge or consent. This led to an explosion of regulation, and a new legal framework for the use of human tissue. But perhaps more surprisingly, it was accompanied by a broadening of the remit of research regulation to apply not just to medical research but to all research involving human participants. I suggest that a driving force here is the regulator mindset. Once you have been set up to prevent catastrophic events, you don’t just focus on the original catastrophe that started the ball rolling; you start trying to anticipate catastrophes, so that you can set regulations in place to prevent them. This inevitably generates huge amounts of additional regulation. I thoroughly agree with the idea that it is better to anticipate problems than to deal with their consequences, but the difficulty here is that the potential catastrophe absorbs all one’s attention and, once again, its probability is never considered.

To take an example, some years ago I was part of a research group that wanted to recruit from a local maternity hospital; we simply wanted to sign up mothers who might potentially be interested in taking part in research when their child was 12 to 36 months of age. At that point, they would be contacted and invited to take part, with no obligation to do so. We did not approach mothers of babies who had any medical problems. One member of the ethics committee was concerned about our procedures. It was suggested that before writing to these parents to invite them to take part in a study, we ought to check with the family doctor whether the child had died. Well, of course, I can imagine it would be awful to receive a letter inviting you to involve your child in a study if the child had died. But what proportion of healthy babies die by 2 or 3 years of age? Is the probability of this happening high enough to justify asking the family doctors of some hundred children to check their medical records before we contacted them? According to the Office for National Statistics, the mortality rate for children aged 1 to 14 years was 12 deaths per 100,000 in 2009. Deaths in children under one year of age were more common, at 4.5 per 1,000 live births, but most of these were babies who would not have been recruited to our study because they were severely ill in the first week of life and/or had very low birthweight. I am again uncomfortably aware that I will seem heartless in arguing against a measure designed to avoid the real but rare possibility of upsetting a bereaved parent. But against this hypothetical risk we need to balance the 100 patients who won’t be seen by their family doctor during the 10 minutes it takes each doctor to locate and check the medical records and reply to the researcher.
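To make the trade-off concrete, here is a back-of-envelope calculation. The mortality rate and the size of the mailing come from the figures above; the two-year window of exposure and the ten-minute check time are assumptions of mine, used only for illustration.

```python
# Back-of-envelope comparison of the hypothetical risk (a recruitment letter
# reaching a bereaved family) against the cost in GP time. The mortality rate
# and the size of the mailing come from the post; the two-year exposure window
# and the ten-minute check time are assumptions made for illustration.

mortality_rate = 12 / 100_000   # deaths per child per year, ages 1-14 (ONS, 2009)
years_at_risk = 2               # assumed: children contacted at 12 to 36 months of age
n_children = 100                # size of the recruitment mailing
minutes_per_check = 10          # assumed time for each GP to check records and reply

expected_bereaved_contacts = mortality_rate * years_at_risk * n_children
total_gp_minutes = n_children * minutes_per_check

print(f"Expected letters reaching a bereaved family per mailing of {n_children}: "
      f"{expected_bereaved_contacts:.3f}")                       # about 0.024
print(f"GP time consumed: {total_gp_minutes} minutes, "
      f"i.e. {total_gp_minutes // minutes_per_check} ten-minute appointments forgone")
```

On these figures, roughly one mailing in forty would reach a bereaved family, at a cost of around a hundred ten-minute consultations for every mailing checked.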

In case you imagine such scenarios are unique to the UK, let me give one more example, from the USA. A colleague who does brain-scanning studies of children with developmental disorders tells me she was required to conduct a pregnancy test with any girl aged 9 years or over who wished to participate in the study. This is particularly striking because it is protecting against a conjunction of two very rare possibilities: (a) that a 9-year-old girl who volunteers for a research study might be pregnant, and (b) that a scan of the girl’s head might damage a foetus.
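As a purely illustrative aside: if the two possibilities really are rare and independent, the probability of both occurring together is the product of the two individual probabilities, and so is rarer still. Both figures below are invented placeholders, not estimates from any source.

```python
# The regulation guards against the conjunction of two rare events. Assuming
# independence, the joint probability is the product of the individual
# probabilities. Both numbers are invented purely for illustration.

p_pregnant = 1e-4     # hypothetical chance a 9-year-old volunteer is pregnant
p_scan_harm = 1e-3    # hypothetical chance a head scan could harm a foetus

p_both = p_pregnant * p_scan_harm
print(f"Joint probability, assuming independence: {p_both:.0e}")  # prints 1e-07
```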

When regulators get together with lawyers, there’s a catalytic reaction, because lawyers are even better than regulators at thinking of things that need regulating. They go beyond defending us from catastrophes and disasters to protecting us against things that have the potential to upset a few people. I was interested to read in the report by the Academy of Medical Sciences that the NHS Litigation Authority had never received a claim relating to research, yet the lawyers insist we put paragraphs in our information sheets about risk, indemnity and how to make a complaint. They’ve also had a major success defending people’s rights not to have their medical records scrutinised by anyone outside the clinical care team. The problem is that if you want to do a medical research study you need to identify suitable people to take part, and that means looking at their medical records. If you insist, as current regulation requires, that no-one outside the clinical care team can look at records, you have two stark options: either the clinical care team spend time trawling through records rather than caring for patients, or the research cannot get done. Richard Doll was one of the first to speak out against this kind of regulation, which makes most epidemiological studies impossible to do. This is one point on which the report by the Academy of Medical Sciences makes a recommendation: that bona fide researchers be allowed to screen medical records.

Another factor leading to the multiplication of regulators is an attitude of trusting no-one. Having required researchers to give a detailed account of what their research involves, down to the last comma in an information sheet, the regulators then need squads of highly trained people to scrutinise the forms to identify possible problems. And if this is not enough, they introduce a further stage of monitoring the research. The implication is that researchers can’t be trusted: unless they write regular reports to the regulators on the progress of the research, they are likely to go off the rails. Even this is not enough: the regulators also have the power to visit researchers to ensure they are doing what they said they would do. All of this, of course, creates more jobs for the regulators. Nobody ever asks whether the money might be better spent on, for instance, doing research.

How can we retrieve the situation?
Having seen the increasing drive for more and more regulation during my lifetime, I am alarmed at its unstoppable progress. I’ve focused here on those aspects that have impinged on my life as a researcher, but the trend for ever more regulation appears to infest many other areas of life. I have two suggestions for how to improve matters:

a) Before any new regulation is introduced, there should be a cold-blooded cost-benefit analysis that considers (i) the severity of the adverse event that the regulation is designed to avert; (ii) the probability of the adverse event; (iii) the likely impact of the regulation in reducing that probability; and (iv) the cost of the regulation, both in terms of the salaries of the people who implement it and the time and other costs to those affected by it. I use the term ‘cold-blooded’ deliberately: our normal human instincts don’t lead us to weigh up these different factors rationally. Instead, we focus solely on (i). A sketch of how such a calculation might be structured follows these two suggestions.

b) We should be more imaginative about the type of regulation that is used. For instance, research regulators increasingly play a role in training researchers in the ethical conduct of research. Currently one is expected to undertake such training in addition to all the form-filling. But why not treat it more like a driving test? Once trained, researchers could be certified as competent and left to get on with it, without having to fill in any forms and without constant scrutiny and monitoring. It would save huge amounts of everyone’s time and money if we could trust people to behave professionally and treat ethical skills more like driving skills. The regulators could then focus on training researchers and offering advice to those who encountered specific ethical issues in their research. Their role would become advisory rather than policing.
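To illustrate suggestion (a), here is a minimal sketch of how such a cold-blooded calculation might be structured. Everything in it is my own invention for the purpose of illustration: the class, the monetisation of harm, and all the figures in the example are hypothetical placeholders, not values proposed in the report or anywhere else. The point is simply that factors (ii) to (iv) enter the sum alongside (i).

```python
# A minimal sketch of the cost-benefit test in suggestion (a). The class name,
# the monetisation of harm, and every figure in the example are hypothetical
# placeholders; the point is that factors (ii)-(iv) enter the calculation,
# not just the severity of the adverse event (i).

from dataclasses import dataclass


@dataclass
class ProposedRegulation:
    harm_per_event: float           # (i) severity of the adverse event, monetised
    annual_probability: float       # (ii) probability of the event in a given year
    risk_reduction: float           # (iii) fraction of that probability removed (0 to 1)
    annual_admin_cost: float        # (iv) salaries of those who implement the regulation
    annual_compliance_cost: float   # (iv) time and other costs to those affected by it

    def expected_annual_benefit(self) -> float:
        return self.harm_per_event * self.annual_probability * self.risk_reduction

    def expected_annual_cost(self) -> float:
        return self.annual_admin_cost + self.annual_compliance_cost

    def worthwhile(self) -> bool:
        return self.expected_annual_benefit() > self.expected_annual_cost()


# Hypothetical example: a severe but very rare event, a modest risk reduction,
# and a large body of people paying the compliance cost.
reg = ProposedRegulation(
    harm_per_event=10_000_000,
    annual_probability=1 / 10_000,
    risk_reduction=0.5,
    annual_admin_cost=200_000,
    annual_compliance_cost=2_000_000,
)
print(reg.expected_annual_benefit())   # 500.0
print(reg.expected_annual_cost())      # 2200000
print(reg.worthwhile())                # False
```

On these made-up numbers the regulation fails the test; with different numbers it could pass, which is precisely why the calculation is worth doing explicitly rather than relying on Gut.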

I had intended to write a blogpost documenting the many stages I have gone through on the road to seeking ethics approval for my current study, but that procedure, started in December, is continuing, and I cannot tell when it will end.

Friday 11 March 2011

The one hour lecture: How to captivate your audience in ten easy steps


1. Don’t rehearse

2. Have at least 100 slides

3. Don’t use PowerPoint’s ‘hide’ function: just rapidly flick through the slides that you don’t have time for - this creates the sensation that you could give them far, far more exciting stuff if only you had more than an hour

4. Spend the first 30 minutes on your introduction - people are always more interested in introductions than in novel content

5. Even if you’ve been told your audience has little background in the area, there are likely to be one or two renowned experts in the room. Focus on the experts. Be sure to impress them with your intricate understanding of the minutiae of the field. Don’t bore them by explaining the basics.

6. Be sure to check politely with the chair ‘How much longer do I have?’ as the 60 minute moment passes

7. Explain to the chair that you need, ‘Just five more minutes’ as the 65 minute moment passes. Your audience will be disappointed it’s only five minutes, but will be pleasantly surprised when you take longer.

8. Introduce the final set of killer experiments as the 66 minute moment passes: the audience will be delighted that you’ve saved the best material to the last

9. Have a slide saying Conclusions which isn’t the last slide. It creates exciting tension if they think you’ve finished only to find there is much, much more.

10. Spend at least 5 minutes on the Acknowledgements slide. Your audience is deeply interested in the many people on whom your work depends, and you should give each one’s name, photograph, country of origin and role in the research, together with a quirky story illustrating their personality.

Sunday 6 March 2011

Where commercial and clinical interests collide: the case of auditory processing disorder

I’m currently writing a blogpost for the Wellcome Trust focusing on my research on auditory processing problems in children with language difficulties. While checking out links I realised there was another post I wanted to write on this topic: not on the science, but on the politics.

As I’ll explain more in the Wellcome Trust piece, auditory processing disorder (APD) is a diagnosis that is made when a child obtains a normal audiogram, i.e. demonstrates normal ability to detect sounds, yet appears not to perceive sounds normally. A common complaint is difficulty hearing speech in noise. Various experimental tests of auditory processing may indicate that the child does not discriminate between sounds that vary in features such as pitch, duration or modulation (wobble).

APD is unusual in that there are no agreed diagnostic tests. I was pretty certain APD didn’t feature in the diagnostic bible of the American Psychiatric Association, the DSM-IV, and googling around suggests it’s not going to feature in the new DSM-5 either.   I was surprised, though, to find a mention of APD, or something very like it, in the alternative bible, the International Classification of Diseases. My searches turned up the category of “Abnormal auditory perception unspecified”, code 388.40 in ICD-9-CM. An accompanying statement on the website read: "388.40 is a billable ICD-9-CM medical code that can be used to specify a diagnosis on a reimbursement claim”. 

Given the lack of agreement on diagnostic criteria and lack of recognition in formal guidelines, it’s impossible to find sensible epidemiological data on APD. My impression, though, is that it’s a diagnosis that is quite commonly made in the USA and Australia but is much less so in the UK. A few years ago, I attended a small UK conference organised by the British Society of Audiology on APD. Many of those attending were audiologists working in the National Health Service (NHS). They wanted to update their knowledge and skills, but were apprehensive of this category, which for many of them was a new one. They were particularly concerned that scarce NHS resources might be diverted to diagnosing a condition of uncertain validity, and even more concerned at the lack of any agreed methods for treating it. The conference organisers had done their best to include a session on intervention, and had written to various American experts who were known to have developed specific approaches to APD. They did not have much joy, however. One expert explained that she didn’t give talks about her intervention, but if the organisers liked, she could run a course on it. I’d never come across this kind of thing before: for the other neurodevelopmental disorders I work on, people who have expertise in intervention will talk to other professionals about what they are doing, and be willing to present information on its rationale, methods and efficacy. Not the case here. This was closed information for which one paid money. And since there was also no published information on rationale, methods and efficacy, it was very much a case of taking it on trust. No thanks, said the organisers.

Are people in North America just less sceptical than those in the UK? The answer is no. While hunting for a mention of APD in DSM, I found a clinical policy bulletin by Aetna. They wrote a critical account of APD and its treatment, and I was pleased to see they cited a recent review by Dawes and Bishop (2009). Their overview stated: “Aetna considers any diagnostic tests or treatments for the management of auditory processing disorder (APD) (previously known as central auditory processing disorder (CAPD)) experimental and investigational because there is insufficient scientific evidence to support the validity of any diagnostic tests and the effectiveness of any treatment for APD.” Further googling revealed that Aetna is a US medical insurance company.

Putting it all together, one can’t avoid the conclusion that APD is Big Business. Not in the UK, where most of our audiologists work for the hard-pressed National Health Service and have no motivation to diagnose this condition. In the USA and Australia, however, audiologists in private practice have a considerable incentive to diagnose APD, as they can then offer expensive treatments for it. The ICD-9-CM code opens the door to claiming these expenses on medical insurance. I initially found it strange that by far the most objective and thorough analysis of APD I could find was on the website of an insurance company, but then realised that insurers are the ones with an interest in being sceptical about this diagnostic category.

The sad thing about all this is that caught in the cross-fire are children whose specific difficulties may have an auditory basis. Yet none of the clinicians seems motivated to develop robust diagnostic tools, and interventions are dreamt up without adequate scientific basis or evaluation. This is a downside of a privatised healthcare system: practitioners benefit from making diagnoses but not from testing their validity.