Don’t treat obesity as physiology or physics

I have a whole bunch of draft and outline blog posts from winter and spring that I was never able to find the time to finish off. To clear them out of the way, I’ve bashed out some half-hearted conclusions, and will post them this month.

Flicking through the pile of Natures that never got read properly, ready to be rid of them, I alighted on Gary Taubes’s opinion piece: Treat obesity as physiology, not physics. Bear with me while I appear to be completely off-topic talking science for a while.

Taubes argues that:

…obesity is a hormonal, regulatory defect… it is not excess calories that cause obesity, but the quantity and quality of carbohydrates consumed. The carbohydrate content of the diet must be rectified to restore health.

Taubes sets out his case that it is not useful to think of obesity as a straightforward energy in/out imbalance that causes weight gain; rather, the solutions lie in understanding that specific forms of that energy — carbohydrates, and sugars doubly so — activate our body’s own fat accumulation systems (through the well-understood insulin process). You’ll be familiar with it from all that “Atkins diet” and “glycaemic index” stuff: energy in the form of carbs bad; energy from dietary fat not so much.

Taubes thinks this is important stuff because:

…the overeating hypothesis has failed. In the United States, and elsewhere, obesity and diabetes rates have climbed to crisis levels… despite the ubiquity of the advice that if we want to lose fat, we have to eat less and/or move more.

There is an obvious response to this, but Taubes pre-empts it:

Yet rather than blame the advice, we have taken to blaming the individuals for not following it ‘properly’.

The implication is that if only we change the advice from “eat less and exercise more” to his “don’t eat high glycaemic index foods”, the advice will be followed and we will then succeed in defeating obesity.

I imagine any “advice” we give will be useless, whether it’s based on physics or physiology.

Because while obesity is about physics and physiology — and psychology and genetics and half a dozen other fields of science — none of those things explain what is important: why there is more obesity now than in the past, and how to make there be less of it in the future.

The laws of physics haven’t changed in fifty years. Physiology, and the genes that underlie it, can change — but only by evolution over the course of hundreds of generations, not a few decades. Sure, our bodies have a mechanism for turning carbohydrates into fat stores. But they always have.

The focus is on the quantity of energy in and the quantity out because that is what has changed during the rise of the obesity crisis. By all means refine that to a specific focus on an excess of high-glycaemic index foodstuffs and a deficit of burning off specific sugars, but the problem that really matters remains fundamentally not one of physics or physiology but of our environment.

Taubes is right to treat those who “blame individuals for not following the advice properly” with contempt. But not because the advice is wrong. Because any “advice” — right or wrong — is going to be useless. This is not a problem that individuals have created for themselves, and it’s not a problem that individuals can be “advised” to solve for themselves. This is a problem of the environment that we live in: the types of food that are available to us, and the opportunities for an active healthy lifestyle that have been taken away from us.

Taubes later uses an analogy with smoking and lung cancer, and the analogy perfectly describes what’s wrong with the idea that obesity should be treated as a physiology problem. We know a great deal about the physiology of smoking-related lung cancer. We know how all of the many different carcinogenic chemicals within cigarette smoke flow through the lungs and pass through membranes into the cells. We know the chemical reactions that they participate in and how those reactions cause damage to the cells’ DNA. We know exactly which pieces of DNA damage result in the harmful mutations that transform them into cancer cells, driven to grow and divide. We know exactly how those mutations — to genes with names like RAS and RAF, and EGFR and a dozen others — change the shape of the proteins that those genes encode, and why that change of shape causes those proteins to misbehave. We know how these things result in the tumour evading the body’s inbuilt defences, how they hijack the blood supply to allow their expansion, and how they go on to invade and destroy neighbouring tissue and eventually escape and metastasise.

And knowing these things about physiology makes not the slightest difference to solving the smoking problem. Smoking-related lung cancer, like obesity, is a process of physiology. But it’s a problem of environment. And the most important lesson from smoking for obesity is that you can’t solve a problem of environment with advice alone. Bad lifestyle choices are not an individual failing. Good lifestyle choices need an infrastructure to support them.

A double-blind trial of 20mph speed limits is an excellent idea

Association of British Drivers chairman Brian Gregory had this to say: “As with most pet road safety ideas proposed by amateur enthusiasts – speed humps, speed cameras, etc, – there is little attempt to collect scientifically sound evidence of the benefit of such ideas. No proper controlled, “double-blind” trials are undertaken…

Brian Gregory is absolutely right. Too many of our transport policies — everything from encouraging the wearing of bicycle helmets to attempting to “smooth traffic flow” in cities — are based on weak evidence and poor quality research, and many of them would benefit from well designed trials.

In particular, a proper double-blind randomised controlled trial of 20mph speed limits would be very welcome. I’m sure you can see how it would be designed.

First we take all of the residential neighbourhoods in the country and split them randomly into “test” and “control” neighbourhoods. It’s double-blind, meaning that the boffins doing the stats should know only that there are two groups, and be “blind” to which one the test has been applied to until after they’ve finished crunching the numbers and determined which group of neighbourhoods came out best.

Then we give all of the test subjects (that is, residents, pedestrians, cyclists, children, the local economy, and anybody/thing else expected, according to the hypothesis, to benefit) the treatment — 20mph speed limits for their neighbourhood. Except that only those in the test neighbourhoods get the active ingredient; in the control neighbourhoods, they receive an inert placebo. It’s double-blind, meaning that these subjects must not know whether they are receiving the active treatment or the placebo.

The active ingredient, of course, is reduced speeds. So we have to implement the active ingredient in the test neighbourhoods while maintaining blinding amongst the test subjects. Easy enough, of course: fit to every vehicle a simple computer which recognises when it is in a test neighbourhood and limits the vehicle’s speed to 20mph in those areas. In the control neighbourhoods, the motorists should continue driving as they usually do at 35mph.
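If you want to picture the allocation step, here is a minimal sketch of how the randomisation and the sealed blinding key might look (the neighbourhood names and the fifty-fifty split are my own illustrative assumptions, not part of any real trial protocol):

```python
import random

def allocate_neighbourhoods(neighbourhoods, seed=42):
    """Randomly split neighbourhoods into two anonymous arms, 'A' and 'B'.
    The statisticians see only the labels; the key saying which arm gets
    the 20mph limiter and which gets the placebo stays sealed until the
    number-crunching is done."""
    rng = random.Random(seed)
    shuffled = list(neighbourhoods)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    arms = {"A": shuffled[:half], "B": shuffled[half:]}
    sealed_key = {"A": "test (20mph limiter)", "B": "control (placebo)"}
    return arms, sealed_key

# Hypothetical neighbourhood names, purely for illustration
arms, key = allocate_neighbourhoods(["Northfield", "Southgate", "Eastcote", "Westbury"])
print(arms)
```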

I’m sure that’s exactly what Brian Gregory must have had in mind, and it’s great to see the Association of British Drivers campaigning for this.

All those helmets posts in one place

Last year I went into some detail about why existing research into the efficacy and safety of helmets for cycling does not come close to the standard of evidence we would normally demand for a medical intervention (which is what they are). Basically, an application to helmets of all those things that Ben Goldacre bangs on about. But I never added an easy-to-link-to contents page when the series was complete. So here it is.

Killer Cures: a very brief introduction to why it is so important to do proper thorough research on medical interventions.

So what’s the best evidence we have on bicycle helmets?: a brief review of the research that has been performed on helmet efficacy — how it was done, what it found, and what the limitations of the methods were.

Headline figures: putting the relative risk figures reported by research papers into context of absolute risk.

What is a bicyclist?: looking at one of the major flaws in existing research on helmet efficacy — ignoring the differences between transport, leisure and sport cycling.

Would a helmet help if hit by a car?: a brief diversion into one of the side-arguments, but an important one given that most transport and leisure cycling deaths and serious injuries involve a motor vehicle.

Risk compensation and bicycle helmets: why it’s important to test for potential side-effects of interventions, some of the proposed side-effects, and why the research in this area also has too many limitations.

The BMA, the BMJ, and bicycle helmet policy: introducing the BMA’s position on helmets. Just one of several organisations with a dodgy position, but one that I think is particularly important/interesting.

How did the BMA get bicycle helmets so wrong?: a sort of conclusion piece, reiterating the importance of doing proper research on both the efficacy and the side-effects of medical interventions, and how an organisation which should know better managed to abandon this principle in favour of anecdotes for helmets.

Followed by some frivolous posts:

Can road loveliness be found in shared space?

This week, science writer Angela Saini introduced Radio 4 listeners to “shared space” in Thinking Streets.

The premise was that there is currently a “war” between the different users of streets,* that the way to create peace has puzzled policy makers for a long time, but that new research points to shared space as the solution.

The conflict on our streets is real. But I think that’s about all that is correct about the story. How to create peace is not a puzzle: policy makers know how to do it, and have known for decades. And new research doesn’t point to shared space as the answer. There’s really very little of what a scientist would recognise as research in shared space — not because streets don’t lend themselves to the scientific method, but because, despite the importance of streets to our health, wealth and happiness, the budgets and expertise required for proper research are rarely turned to the topic.

This doesn’t mean that there isn’t a powerful group of people who have convinced themselves that shared space is the revolutionary solution to the problems with our streets. The programme was largely devoted to the now familiar routine of these shared space evangelists, but there are a number of important things missing from the evangelists’ routine — things that I think would have been interesting to hear about in the “street science” narrative.

The first thing that is missing is the full story of the wider differences between the streets of the UK and those of the evangelists’ preferred example, the Netherlands. The second is the full story of the history of risk compensation on the roads. And the third is the full story of how the UK came to be transforming streets into “shared space”.

The first story is one that readers of this blog will now be familiar with. The Dutch have a far more advanced system of roads and streets than we have in the UK. We just pour asphalt everywhere, preferably in a configuration that allows people to drive fast, sometimes put a footway on the side, and then let people drive cars and trucks anywhere and everywhere.  The Dutch, meanwhile, take care to distinguish between roads, streets and lanes, build them differently, and have clear and widely understood differences in the expected use of and behaviour on them. And they build them following the principles of “sustainable safety”: ensuring that users share space only with other users who have roughly similar kinetic energy and direction.

That last point should have been made when introducing what the programme calls “home zones” (though British “home zones” have never fully replicated the Dutch woonerven). Woonerven apply the sustainable safety principle that you only mix users who have roughly similar energy — by banning heavy vehicles, and cutting the speed of the remaining motor vehicles to a crawl. “Shared space” may share some of the superficial characteristics of woonerven, but the crucial one for making people safe and comfortable is the equality of energy.

These wider differences between the UK and the Netherlands are important. They mean that Dutch drivers already understand streets differently to British drivers. And they mean that the Dutch have a vastly different proportion of journeys made by bicycle. The demands for, purposes, effects, and success of novel street designs are therefore going to be different in the Netherlands than for equivalent changes in the UK.

The second story that was missing from the programme was about risk compensation. The evangelists told the usual story to explain how shared space is supposed to work: “an environment that overtly keeps us safe makes us behave less cautiously, whereas a shared space makes us more sensible.” Motorists, the story goes, will see the unfamiliar shared space street scene, with its jumble of different users and lack of signs to tell them what to do, and their automatic response will be to slow down and pay more attention. Pedestrians and cyclists, meanwhile, will respond to the increased sense of danger and discomfort by pricking up their ears and keeping their wits about them. Risk compensation, the story goes, means that in shared space everybody will become friendly, with drivers giving way and letting pedestrians cross.

This is little more than a just-so story. Even in the Netherlands, the evidence that it actually happens this way is weak and far from scientific. In Britain, though, it can be outright contradicted by ten minutes hanging around any shared space street. Taxi drivers still speed up Exhibition Road (if there isn’t a traffic jam already blocking the street). Traffic still completely dominates the seafront at Blackpool, and the blind and disabled now stay away from it. There’s not much friendliness from the white van men at Seven Dials. “Where once you would feel crazy walking on the carriageway…,” they say of Exhibition Road. Well, observations of the scheme so far suggest that pedestrians and motorists alike view anybody on foot who casually “shares” the carriageway — walking outside the clear pedestrian “safe zone” — as crazy, and will shout and blast their horns at such people.

Saini observes that in the Netherlands cars “just stop” for pedestrians trying to cross the shared space. London cabbies and commercial drivers on a deadline don’t stop for red traffic lights, let alone mere pedestrians trying to get in their way. That Dutch drivers do is less a product of the shared space environment and more to do with the fact that the Dutch recognise a fundamental difference between “roads” and “streets” and how people are expected to behave on them.

Risk compensation theory is legitimate science, but in shared space the theory is applied to explain a phenomenon that, at least in the UK, just doesn’t exist: motorists becoming more cautious and friendly. In fact, the results of risk compensation can be seen all over British streets, and risk compensation on the roads has been a powerful force shaping our behaviour, built environment, and health and wealth for almost a century. But with the exact opposite effect of that claimed for shared space.

The Rt Hon JTC Moore-Brabazon recognised the existence of risk compensation when he said, in objection to the introduction of speed limits in 1934:

“It is true that 7000 people are killed in motor accidents, but it is not always going on like that. People are getting used to the new conditions… No doubt many of the old Members of the House will recollect the number of chickens we killed in the old days. We used to come back with the radiator stuffed with feathers. It was the same with dogs. Dogs get out of the way of motor cars nowadays and you never kill one. There is education even in the lower animals. These things will right themselves.”

When people feel unsafe and uncomfortable, they stop doing whatever it is that makes them feel that way, or stop going to the places where they feel unsafe. It is entirely true that, as the programme says, “an environment that overtly keeps us safe makes us behave less cautiously, whereas a shared space makes us more sensible.” The environment that overtly keeps us safe — and which has kept us more safe with every technological innovation and toughened standard — is the interior of the motor car. In the safety of the motor car people behave without caution. The result is that everybody else feels less safe and compensates by getting out of the way. We walk less and less, bundle kids into SUVs for the school run, and most people will now never consider using a bicycle. JTC Moore-Brabazon recognised this process of risk compensation in the 1930s.

The shared space/risk compensation hypothesis is not simply a just-so story. It’s a just-so story that ignores all of our previous experience of streets. When pedestrians and cyclists felt uncomfortable and threatened by the rise of motor traffic on their streets, they compensated by getting out of the way. They went somewhere else, or swapped the bicycle for a car of their own. So when the programme says that, statistically, Dutch shared space is at least as safe as the traditional streets that it replaces (it’s probably not (p10)), far from being proof that those streets are working because “everyone becomes aware of each-other”, it is in fact just another consequence of the most vulnerable road users staying away from those streets. The increased risk of collisions and injuries on these streets is compensated for by those people who are most likely to get injured — the pedestrians that schemes like Exhibition Road are supposed to attract — staying away.

The final story that is not properly explored is why Britain is building shared space streets and other “road loveliness”, as the programme puts it, such as the scramble crossing at Oxford Circus. The programme’s only comment on this was that we are now designing places for people instead of merely designing places for cars. In fact, designing successful places for people has been going on for a long time. To create them, you first get rid of cars. In his 1995 Reith Lecture, “The Sustainable City”, Richard Rogers described all the opportunities that came from removing motor vehicles from places, and listed some of the top priority places in London that needed the treatment. The terrace in front of the National Gallery on Trafalgar Square was one, and this was implemented in 2003, creating a largely pedestrianised zone between Trafalgar and Leicester Squares. One side of Parliament Square almost followed, but the plans were cancelled when Boris came to City Hall, and our politicians now seem determined to keep Parliament Square as an isolated and desolate traffic island forever. A riverside park in place of the Embankment road from Parliament Square to Blackfriars Bridge was the most radical of the suggestions, and the one that politicians wanted least to do with. And then there was Exhibition Road:

Albertopolis – the collection of major museums and universities in South Kensington, including the Albert Hall and the Victoria and Albert Museum – could be connected across the road into Hyde Park. Exhibition Road could become a pedestrianised millennium avenue, part of a network of tree-lined routes.

Exhibition Road has been transformed because the case for transformation was overwhelming. The need to make a more attractive environment and the opportunities and benefits of a place for people were obvious. But shared space is a miserable compromise. Shared space on Exhibition Road is not an alternative to the old four-lane highway layout, a layout that everybody already agreed could not be allowed to stay on such an important street. It is an alternative to the much needed and long called for removal of the motor vehicles, which will now continue to dominate the space, continue to separate Albert, perched in the park, from Albertopolis, and continue to choke South Kensington with pollution.

Far from being a case of people reclaiming the streets from cars, Exhibition Road and Oxford Circus are examples of places where traffic has succeeded in clinging on to its ownership and dominance of streets that so obviously needed to be properly reclaimed. None of the great economic and cultural opportunities that Richard Rogers described have been enabled by the changes. No modal shift, no health or environmental benefits will result from them. It was built — for £30 million — but they won’t come for fancy paving alone.

(I don’t think the programme makers can be blamed for failing to discuss these points — the fault lies with the shared space True Believers. Shared space is currently very trendy in a field that doesn’t have much experience with scientific skepticism. There are a lot of people who desperately want it to work and so have convinced themselves that it must work — as one person tweeted, if Jeremy Clarkson is a critic, it must be a good thing. Steve Melia is one of the few academics to have tried to introduce some of the much needed scientific skepticism — and I imagine the publication of his paper came too late for the programme.)

Shared space is the topic of next week’s Street Talk: Stuart Reid, Director of Sustainable Transport and Communities, MVA Consultancy, will talk about Creating successful shared space streets, followed by a chance to raise questions. As usual it’s upstairs at The Yorkshire Grey, 2 Theobalds Road, WC1X 8PN at 7pm (bar open 6pm) on Tuesday 10th January.

* illustrated by broadcasting sound bites that included the sort of massacre fantasies that would, with any other kind of weapon, result in arrest, but for some reason never does when the weapon is a motor vehicle.

Won’t somebody please think of the children?

In December 2005, an article of massive importance was published in the British Medical Journal. Doctors counted up the number of children being admitted to A&E with musculoskeletal injuries (breaks and sprains — many of which would have been caused by bicycle-related incidents) on summer weekends and discovered a startling pattern: a new preventative intervention. The authors say:

The figure shows the weekend attendance to our emergency department in June and July between 2003 and 2005. The mean attendance rate for children aged 7-15 years during the control weekends was 67.4 (SD 10.4). For the two intervention weekends the attendance rates were 36 and 37 (mean 36.5, SD 0.7). This represents a significant decrease in attendances on the intervention weekends, as both are greater than two SD from the mean control attendance rate and an unpaired t test gives a t value of 14.2 (P < 0.0001). At no other point during the three year surveillance period was attendance that low. MetOffice data suggested no confounding effect of weather conditions.
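Out of curiosity, you can check the headline claim in that excerpt, that the intervention weekends sit more than two standard deviations below the control mean, from the quoted summary figures alone (a full t-test would also need the number of control weekends, which the excerpt doesn’t give):

```python
# Summary figures quoted in the BMJ letter above
control_mean, control_sd = 67.4, 10.4
intervention_mean = 36.5  # mean of the two intervention (Harry Potter release) weekends, 36 and 37

# How many control-weekend standard deviations below the control mean?
z = (control_mean - intervention_mean) / control_sd
print(f"{z:.2f} SD below the control mean")  # roughly 3 SD, i.e. "greater than two SD"
```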

From this data on the effect of Harry Potter books on injury rate it should be blindingly obvious that countless lives would be saved if legislation made Harry Potter books compulsory — for children at the very least (we can perhaps allow adults the freedom to choose to turn themselves into dribbling brain damaged wrecks by not reading Harry Potter).

Anybody who cycles while not reading Harry Potter clearly deserves to have their brains smeared across the road. They lack any credibility.

Gwilym, S. (2005). Harry Potter casts a spell on accident prone children. BMJ, 331:1505-1506 doi:10.1136/bmj.331.7531.1505

Further reading: ‘Tis the season, from Language Log.

Appendix: Bad Science Bingo in the BMA’s “safe cycling” pages

This is just a crude brain dump of a post that comes after the serious series — posts one, two, three, four, five, six, seven and eight.

Sorry, I just can’t get over these extraordinary pages on the BMA’s website. Here’s a very quick run through some of the Bad Science Bingo points that leaped out.

There were the canards, fallacies, and methods of misdirection:

  1. Obviously there’s the emphasis on anecdotes and cases, the lowest form of evidence, which are essentially appeals to emotion.
  2. Coupled with that the description of the “beliefs” of a few doctors, designed to nudge readers into conformity (acting as a subtle argument from authority for readers who are not doctors, and an argumentum ad populum for those who are).
  3. Specifically in a couple of the anecdotes the selective recall of serious injuries in non-helmet wearers and minor injuries in helmet wearers (creating the illusion of control).
  4. “Figures from New Zealand show that in 2006 there were 883 cyclists injured and nine killed. This corresponds to 20 people per 100,000 injured and 0.2 people per 100,000 killed. These figures are lower than those reported for 1994 when legislation was first introduced.” Fun factoids, but they don’t actually say anything about helmet efficacy. Lots of things changed between 1994 and 2006. (Post hoc, etc.) Perhaps there is evidence for NZ’s legislation improving safety but the 2006 crude injury statistics aren’t it.
  5. Incidentally, while we’re on correlation and causation, the authors even get their statements on cycle tracks subtly wrong: “During the period of 1976 to 1995 Germany almost tripled their mass of cycle networks and this led to a 64 per cent drop in cyclist deaths.” While the evidence of a causative link is much stronger here, it’s a lot more complicated than a simple one “led to” the other. The reference does indeed state that Germany tripled their cycle network and that their death rate fell, but it notes that the latter is in part the result — directly and indirectly — of the former.
  6. I loved this statement, when discussing the side-effect of reduced rates of cycling: “If legislation were to reduce the rates of serious injury and promote increased public confidence in cycling, the effect might be to make cycling more popular. Clearly, there is a need for further research on this matter.” I don’t know where to begin. After dismissing all the side-effects of helmets as being based on too weak and preliminary evidence, the BMA counter it all with a speculation based on none at all — and tell us that there is a clear need for more research. Well quite.

And there were specific claims or activities that run counter to the cited evidence, or subtly misrepresented it (I did not systematically check references, these are simply things that leaped out as contradicting what I recall of the literature):

  1. On page 2 the BMA list the things they are doing in addition to promoting helmets. The first item is “publicity and education campaigns in order to raise drivers’ awareness of more vulnerable road-users, including cyclists”. We know that these don’t really work.
  2. The “risk compensation” section on the fifth page cites just one source, the Spanish study described on Monday, whose study design we know can not answer the question that they are asking it to answer.
  3. “As noted in Table 2 the Macpherson and Spinks 2007 Cochrane review found no evidence to either support or counter the possibility that legislation may lead to negative societal and health impacts such as reductions in cycling participation.” You would probably read this and think, “studies have been done and they found no evidence for X.” It actually means, “the studies didn’t bother looking at X.”

And there were fun inconsistencies:

  1. Kirsty’s story on page 5, “Doctors believe that had she not been wearing a cycle helmet at the time of her crash, she would have died,” and on page 6, “They have been shown to reduce the risk of head injury and its severity should it occur. This does not apply to fatal crashes but in such instances the force of impact is considered to be so significant that most protection would fail.”

The resource is just generally bizarre. It has a very weird set of focusses. On one page it gives a seemingly arbitrary selection of factoids from cyclist demographics (notably absent is any acknowledgement that “cycling” is not a single activity); on another it notes the diversity of cycle helmet standards — but fails to discuss any of the important consequences of this, such as how few helmets these days meet the stricter standards that applied in the past, back when most of the evidence on helmet efficacy was collected. In a table on the fifth page they mention that a study found no evidence of helmets causing or exacerbating rotational injuries — yet this is the only mention they make of the rotational injuries problem. Their inclusion and omission criteria appear to be completely random.

Anyway, enough of this. I don’t want to hog the game — your turn.

How did the BMA get bicycle helmets so wrong?

In 1958, the UK licensed a drug for treating morning sickness. It worked very well. The studies all showed that pregnant women suffering from morning sickness received much relief with the drug. Three years later it was withdrawn, but not before 2,000 babies were born with birth defects — 20,000 worldwide — three quarters of whom would die in infancy. The drug was, of course, thalidomide. It managed to get licensed because too many of the people studying it were focused on very specific aspects of its activity on the disease states that it was thought to treat, and too few were stepping back and looking at the big picture. It prevented morning sickness, therefore it worked — the logic of the day.

Joe’s anecdatum: In 2003, Joe, an 18 year old male, slipped on some wet stairs in a block of flats. His head fell eight feet onto the concrete floor. He was not wearing a bicycle helmet. He had a headache for the rest of the evening. He has never been diagnosed with any long-term ill-effects.

A disaster on the scale of thalidomide can’t happen these days because the path to drug licensing forces researchers to comprehensively check all effects and outcomes of a new drug. Individual researchers will know in extravagant detail very narrow aspects of how a new drug achieves its desired effect. Some of them will know the exact rate at which it crosses the various barriers into the blood and into organs; others will know the exact chain of activation of molecules and genes within cells, down to the individual amino acid residues that are modified and the exact number of seconds after the drug is administered; others will know the exact schedule and mechanism by which the drug is broken down or expelled from the body. They’ll be really excited and enthusiastic about their new drug. But when somebody steps back and points out that the drug causes heart failure, it won’t get anywhere.

But the BMA seems to forget everything it knows about testing interventions when it comes to bicycle helmets. There are some superficial differences between helmets and what we normally think of as “medical intervention”. They are a physical intervention rather than a drug — but medicine deals with and properly tests physical interventions all the time. And it’s supposed to prevent rather than treat injuries — but medicine deals with and properly tests preventative measures, including conventional drugs, all the time. There is no intrinsic reason why bicycle helmets can not be tested properly, in line with the rules that were designed to prevent another thalidomide disaster. We have the methods and the expertise.

Joe’s anecdatum: In 2009, Joe, a 23 year old male, slid on the gravel on the Greenwich Peninsula Thames Path, hitting his head on the concrete path and writing off an £800 camera lens. He was not wearing a cycle helmet. He was unhappy and was bored for several hours waiting for Lewisham Hospital to glue his face back together. He stayed home all next day. He has never been diagnosed with any long-term effects.

And yet the evidence that we have on bicycle helmets is currently in a worse state than the evidence that got thalidomide licensed. There is some (limited) evidence that in people who have had crashes, helmets reduce the rate of specific types of head injury — just as there is undisputed evidence that thalidomide is effective in relieving morning sickness. But there is also (equally limited and disputed) evidence of several different side effects — an increase in other types of injury* and an increased rate of crashes (particularly crashes with vehicles, which are more likely to have negative outcomes). And there is also evidence that helmets discourage many people from cycling* — an activity that adds many quality years to people’s lives by preventing or delaying cardiovascular disease, cancers, diabetes, depression, dementia, and all those other diseases of sedentary lifestyles. Helmets might be an effective intervention for the types of injuries they are claimed to prevent, but that would be irrelevant if, like thalidomide, they cause more problems than they solve.

Joe’s anecdata: In 1991, Joe, a 6 year old male, on separate occasions smashed his head open on a door, some concrete steps, and a glass coffee table. On no occasion was he wearing a cycle helmet. He has a scar on his forehead that is almost identical to James Murdoch’s. Unlike James Murdoch, he has never been diagnosed with any other long-term impairment or ill-effects.

I’m not saying that they do. The issue is not that there is overwhelming evidence against helmets. The evidence that they are the cause of crashes and other injuries is no stronger than the evidence that they prevent head injuries. The issue is that the evidence either way is nowhere near good enough to make a recommendation. If helmets were a drug, they would be nowhere close to getting licensed right now.

Which is why British doctors should be embarrassed that the British Medical Association currently lobbies for helmets to be compulsory when riding a bicycle. Imagine if a pharmaceutical company developed a drug which, if administered before receiving a specific kind of traumatic injury, makes that injury easier to treat. Imagine doctors and medical scientists lobbying for it to be compulsory for everybody to take this drug daily, without anybody ever having checked for side-effects.

How has this situation arisen?  The policy decision has largely been made on the insistence of A&E consultants and trauma surgeons.  Consider the anonymous quotations that are scattered through the BMA’s cycling pages:

‘I have seen – in my practice and when working in A/E – quite a number of serious head injuries from children falling off bicycles. I have also seen a number of children who wore helmets who only suffered minor injury. I am convinced that helmets reduce injury.’ — GP

’I would certainly support cycle helmet wearing for cyclists. I have seen far too many young lives ruined by head injuries.’  — Consultant in Emergency Medicine

’I am an Emergency Department Consultant and a keen cyclist. I wholly agree…that we need to move to an environment where cycle helmet wearing is the norm, rather than the exception’  — Emergency Department Consultant

’As a regular commuting cyclist through twelve miles of heavy London traffic and as a Consultant Emergency Physician I whole-heartedly support the BMA’s stance on the introduction of legislation to make the wearing of helmets mandatory.’  — Consultant and Honorary Senior Clinical Lecturer in Emergency Medicine

’Over the [last] 16 years I have worked in A/E. I have dealt with hundreds of head and facial injuries, particularly in children, that could have been avoided had a cycle helmet been worn. I have also had the misfortune to deal with a number of fatalities that I believe would have been avoided by simply wearing a helmet. I firmly believe that legislation making cycle helmet usage mandatory is essential.’  — Emergency Medicine Consultant and Clinical Director

‘I have worked in emergency medicine for the last twelve years. Personally I cycle around two and a half thousand miles each year and my family are rapidly becoming keen cyclists also. Prior to working in emergency medicine, I did not routinely wear a cycle helmet.

I have seen numerous examples of patients sustaining severe head injuries from which they will never recover whilst cycling at low speed without a helmet. I have never seen this pattern of pathology in cyclists wearing helmets under these circumstances.

I am aware of the recent Cochrane review on the subject. I firmly believe that all cyclists should wear helmets. I also believe that the only way to ensure this happens is through legislation. I can see no justification for allowing this entirety predictable pattern of head injuries to persist. I strongly support the BMA position…’  — Consultant in Emergency Medicine

That’s five emergency medics and a GP, all reciting anecdotes from A&E. Nobody who specialises in, say, public health.

Emergency medics and trauma surgeons are obviously very enthusiastic about the potential to put an end to injuries, just as people who were very focused on the problem of morning sickness were excited by thalidomide. But ironically, most doctors and scientists are not very good with complexity. They are good with the intense detail of their own specialism, but when they have a problem to solve they fail to consider that there might be relevant things happening outside of their own field. When emergency medics want to solve the problem of head and brain injury, they look at those injuries in isolation from the rest of medicine. It’s not their job to think about the bigger picture, or worry about things like side-effects.

Indeed, dare I suggest that for most working emergency medics and GPs, the science of evidence-based medicine is not their job or even a major part of their training: they only need to practice what the scientists amongst them tell them to practice; most working doctors don’t need to understand how we know their interventions work.

Which is fine. But that stuff is somebody‘s job, and somebody isn’t doing it right at the BMA.

This way of thinking about the issue — as an isolated problem of emergency medicine — is reflected all through the BMA’s bizarre “safe cycling” pages, which emphasise these individual anecdotes and opinions of doctors in that field (despite “expert opinion” being frequently out of line with the science and despite everything we know about the ability of anecdotes to lead readers astray), while failing to ever think of the issues around helmets in terms of effects and side-effects or the usual path of research that is demanded for medical interventions.

The authors of the Cochrane review on bicycle helmets say, in dismissing risk compensation, “the fundamental issue is whether or not when bicycle riders crash and hit their heads they are benefited by wearing a helmet.” This is exactly analogous to saying that “the fundamental issue is whether or not when a pregnant woman has morning sickness her symptoms are relieved by thalidomide.” That is not the fundamental issue at all. The fundamental issue with any medical intervention is whether it does more help than harm, whether it improves the length and quality of our lives, whether we are better with it or without. That the authors of a Cochrane review are allowed to get away with saying otherwise is a great failure for evidence-based medicine. That the BMA think there are sufficient grounds not merely to promote this intervention but to enforce it is an epic failure.

* I thought about posting separately on these sets of side-effects too, but those posts would have been much like the rest of this series: there’s a plausible hypothesis, there’s some evidence to support it, but the evidence has limitations. Ultimately the conclusions always are: the evidence base is nowhere near good enough to support helmet promotion, let alone legislation.

The BMA, the BMJ, and bicycle helmet policy

The reason I pick up the bicycle helmet theme again this week is that the BMJ is running a sidebar poll of their readers (or, more accurately, of cycling tweeters and recipients of Robert Davis’s emails ;-)), asking whether it should be compulsory for adult cyclists to wear helmets.

The BMJ is the journal of the British Medical Association, the professional association and trade union of British doctors. Part of the BMA’s remit is to lobby the government on issues that its members believe are important, and it has some clout in this area. These policies are decided by a representative democracy — a group of members elected by region and by field. In recent years, this body has decided that it is BMA policy to support legislation that would make helmets compulsory for cyclists.

Doctors might not even have noticed the adoption of this policy.  To most it is probably an irrelevance — most people will not cycle in the conditions that prevail in this country and doctors are no exception. And I imagine that very few have read the quite astonishing “promoting safe cycling” pages of the BMA website. Readers of Ben Goldacre should get their Bad Science Bingo cards out before clicking the link.

Tomorrow I’ll dissect those pages and ask how they came to be so bad. But there is a more basic issue here. Never mind whether helmets are effective or not, aren’t there more important policies that the BMA should be pursuing?

In 2002, the BMJ polled readers about issues of health and road danger — a slightly more scientific and insightful survey than the free-for-all yes/no question that they ask this week, and one much better targeted to British doctors rather than every joker on the internet.  They asked readers to judge the importance, on a scale of 1 to 6, of various interventions for promoting the health and safety of pedestrians and cyclists. Helmets came out bottom of the doctors’ priority list:

Average ranking Response
3.25 More and better cycle safety training
2.87 Compulsory cycle helmet wearing
3.42 Separate lanes for bicycles in urban areas
4.04 Traffic calming to reduce vehicle speeds in urban areas
4.04 Reduce car use by better public transport and by encouraging walking and cycling
3.85 Banning motorised vehicles from towns and cities

Interestingly, helmets for cyclists was ranked as only a slightly more sensible solution than helmets for pedestrians. Indeed, the results for pedestrians look much like the results for cyclists.

It’s the most heartening thing I’ve read in a long time. Most doctors get it. They’re not ignoring the bull. Certainly all of the public health doctors and epidemiologists (the people with the most exposure to scientific methods, incidentally) that I know get it. The problem is not that cyclists are taking insufficient measures to protect themselves from danger, it is that they are put in danger by motorists and by the government policies and societal norms that support the mixing of fast-moving motor vehicles, including those driven by people known to be dangerous and incompetent, with cyclists and pedestrians in our towns and cities.

Alongside their policy of lobbying for legislation to compel the use of helmets, the BMA has drawn up a set of recommendations for motor-vehicle reduction. But while the former policy is actively being pursued in Westminster and in the nations, the latter looks to have fallen by the wayside, and is still stuck in 1997. Why?

Risk compensation and bicycle helmets

Some months ago I left a series on bicycle helmets hanging while I got distracted with other things. We looked at the best evidence for the efficacy of helmets in preventing injury in the event of a crash, and at some of the reasons why we should be cautious about that evidence. We found that if you’re unlucky enough to have been hospitalised while riding a bicycle, you’re less likely to be there with a head or brain injury if you were wearing a helmet at the time of the crash. We noted several ways in which this protective effect is exaggerated and used to mislead; we noted that the reduction in injury is from a very low level anyway; and we noted that the research so far done fails to provide any sub-analysis of very different riding styles, such as racing cyclists, mountain bikers, and utility cyclists.

We also made careful note of the fact that a reduction in the rate of head injury in the event of a crash is a different finding to a reduction in the rate of injury and death of bicyclists. We briefly began the exploration of what this means by considering the fact that helmets are not much defence against a motor vehicle.

How could a reduction in head injury in cyclists who crash not mean a reduction in injury and death in bicyclists? Well, helmets could be causing other kinds of injury in crashes. Or they could be causing crashes — particularly the worst kinds of crashes.

The latter is a particularly interesting avenue. The idea is risk compensation or risk homeostasis, a phenomenon documented in fine detail by John Adams in the 1985 book Risk and Freedom. Adams showed that advances in road safety — seatbelts, motorcycle helmets, safer vehicle designs and wider, straighter, safer road designs — are never followed by quite the cut in injuries and deaths that is predicted; that while road “safety” has improved, crashes are no less frequent; and that bystanders — pedestrians and cyclists — are butchered at an ever increasing rate. There is a set level of danger that people are willing to tolerate, and so motorists were compensating for the new safety features by driving faster and taking more risks. To put it in Adams’s technical terms, potential “safety benefits” were instead absorbed as “performance benefits”.

James Hedlund reviewed the evidence on risk compensation and came up with a set of rules for when people are likely to compensate for a safety intervention:

  1. They know it’s there.
  2. They know it’s a safety feature.
  3. There is a potential performance benefit to be had.
  4. There is freedom to realise that performance benefit.

Well cyclists know whether or not they’re wearing a helmet, they know that helmets are meant for safety, there are potential performance benefits — riding faster, through smaller gaps, in more hostile traffic, or with less caution in conditions that would otherwise advise it — and cyclists are generally free to ride more furiously if they want to. (Indeed, you may be wanting to cycle faster, in which case go ahead and use a safety feature as a performance benefit if that works for you.)

But that’s only a hypothetical reason to expect risk compensation by cyclists wearing helmets, not evidence that it actually happens. And very little effort seems to have been put into researching that — perhaps because it’s difficult to devise a properly controlled test. A study of cyclists in Spain attempted to test the idea by comparing the rate of helmet wearing in traffic law violators to the rate in non-violators, finding that law breakers were less likely to be helmet wearers, the opposite to what they say should be expected if there is risk compensation. However, this study could not control for all possible differences between the populations (“confounding variables”) — for example, helmet wearers are already a population of safety-conscious conformists, less likely to commit traffic violations, and so a better question to ask would be whether those helmet wearers acted even more cautiously when their helmets were taken away from them, and whether the non-wearers behaved even more recklessly when given a helmet. (This study is, embarrassingly, the British Medical Association’s sole reference for their dismissal of risk compensation.) A more recent study observed a set of participants’ behaviour with and without a helmet, using speed as an indicator of risk taking and heart rate variability as a proxy for risk perception. This study found that when helmet users had their helmet taken away, their risk taking (i.e. speed) reduced to keep the risk perception stable. However, the study only looked at 35 people, and only looked at proxy variables. Neither study is very convincing — the limitations I describe here are just the tips of the icebergs — and certainly nowhere near strong enough or specific enough to guide policy. We still have a mere plausible hypothesis with no good evidence as to whether or not it’s true.
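To make that confounding argument concrete, here is a toy simulation (every number in it is invented for illustration, none of them comes from the Spanish study) in which helmets have no effect whatsoever on behaviour, yet a cross-sectional comparison of the kind described above still finds that violators are less likely to be helmet wearers, simply because the safety-conscious both wear helmets more and violate less:

```python
import random

random.seed(1)
riders = []
for _ in range(100_000):
    cautious = random.random() < 0.5                            # underlying disposition (the confounder)
    helmet = random.random() < (0.6 if cautious else 0.2)       # cautious riders wear helmets more
    violates = random.random() < (0.05 if cautious else 0.25)   # cautious riders violate less
    # Note: helmet wearing has NO effect on violating in this model
    riders.append((helmet, violates))

violators = [h for h, v in riders if v]
non_violators = [h for h, v in riders if not v]
print(f"helmet rate among violators:     {sum(violators) / len(violators):.2f}")
print(f"helmet rate among non-violators: {sum(non_violators) / len(non_violators):.2f}")
# The violators show a lower helmet-wearing rate even though, by construction,
# helmets change nobody's behaviour; the association is driven entirely by the
# unmeasured 'cautious' trait.
```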

The authors of the Cochrane review acknowledge the suggestion that risk compensation by cyclists could affect their crash rate, but believe that is unlikely. It’s interesting to see a hypothesis dismissed with the argument from personal incredulity in a Cochrane review.

What is not touched on in the review, and which is potentially far more important (given the fact that crashes with motor vehicles are more likely to kill or seriously injure), is the risk compensation effect not of cyclists themselves but of the other road users around them — i.e., of the motorists. Look again at Hedlund’s rules. Motorists can see whether a cyclist is wearing a helmet; they know that helmets are supposed to be a safety feature; they can potentially find performance benefits — they think they can squeeze through tighter gaps when overtaking against oncoming traffic, or pass more quickly, or shoot in front while turning, because if they hit the cyclist then no harm is done; and there is nothing to stop them realising that performance benefit, since the police, if there even are any, are rarely even aware of the relevant traffic rules, let alone bothered with enforcing them. There is therefore a plausible hypothesis that motorists will take more risks around cyclists who wear helmets than around cyclists who do not.

This hypothesis is made all the more plausible by the fact that, in addition to potentially making cyclists seem less vulnerable, helmets make cyclists look more competent: in surveys of motorists’ beliefs, most assume that cyclists who wear helmets are more experienced and more “responsible“, meaning that they may be driving more carefully around non-helmeted cyclists who they expect to do something silly. And motorists overwhelmingly think that cyclists should be forced to wear helmets — presumably so that the motorists can get the performance benefits of driving more dangerously around them.

The motorist risk compensation theory has famously been tested by @IanWalker in one of the most delightful experiments in the field. Walker rode around Salisbury and Bristol on a bicycle fitted with an ultrasonic distance sensor, measuring the effect of a number of variables on passing distance, including rider position in the road, type of motor vehicle, and whether he was wearing a helmet. Analysis of over 2,000 passes showed that motorists tended to give on average around 5-10 cm less space when the rider wore a helmet. It’s not much of a difference, and the effects of motor vehicle type, perceived rider gender, and rider’s distance from the edge of the road were all stronger.

But it’s important to note that there is always a distribution of passing distances — a bell curve. There are a few motorists who give a lot of room, a few who scrape past, and a lot clustered in the middle, giving a little over a metre of space. When the rider wears a helmet, the whole bell curve shifts in a little. The cautious drivers give a little less space, the average drivers give a little less space, and the dangerous drivers give a little less space. It’s the latter who are now more likely to drive into you.
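The tail is what matters here, and a back-of-the-envelope sketch shows why. Assume, purely for illustration (these numbers are my own, not figures from Walker’s paper), that passing distances are roughly normally distributed around 1.2 m with a 0.35 m spread, and call anything under 0.5 m a dangerously close pass. Shifting the whole curve in by 8 cm barely changes the typical pass, but it noticeably swells that dangerous tail:

```python
from statistics import NormalDist

# Illustrative numbers only: the mean/SD of passing distances and the 'dangerously
# close' threshold are assumptions, not figures from Walker's study.
no_helmet = NormalDist(mu=1.20, sigma=0.35)
helmet = NormalDist(mu=1.12, sigma=0.35)   # whole curve shifted in by about 8 cm
threshold = 0.50  # metres

p_no_helmet = no_helmet.cdf(threshold)
p_helmet = helmet.cdf(threshold)
print(f"close passes without helmet: {p_no_helmet:.3%}")
print(f"close passes with helmet:    {p_helmet:.3%}")
print(f"relative increase:           {p_helmet / p_no_helmet:.2f}x")
```

On these made-up numbers the close-pass rate goes from roughly 2% to nearly 4%: a small shift in the average becomes a much bigger shift out in the tail, where the danger is.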

Walker’s research, delightful as it is, is itself not without limitations. Most important amongst them is that, when it comes to answering questions of cyclist safety, it suffers the same limitation of measuring only proxy variables: passing distances rather than actual risk of crashes and injuries. But it tells us that there is a very important reason to study more than just the isolated risk of head and brain injury in the event of a crash.

Helmets are a medical intervention, exactly like a drug or surgical procedure. They are a preventative intervention and they are a physical intervention, but neither of those are alien to medicine and to the modern methods of evidence-based medical science. And risk compensation is just a side-effect of this medical intervention, like the side-effects of drugs. The side-effects of drugs that make it to market are by definition outweighed by the beneficial effects; but ten times as many drugs are discarded during development because the research finds that either the side-effects are so big or the beneficial effects are so small that the harm outweighs the help.

The authors of the Cochrane review defend their dismissal of risk compensation by saying “the fundamental issue is whether or not when bicycle riders crash and hit their heads they are benefited by wearing a helmet.” And that’s fine if you’re in the preliminary stages of developing an intervention and you are so far only concerned with whether it has beneficial effects. But the authors go far beyond that early stage in their conclusions, recommending that this intervention be compulsory — despite there being very good reasons to suspect that there are potentially major side-effects of this intervention. They can’t have it both ways. If you haven’t bothered studying the side-effects you can’t license the drug. It might kill people.

Can drivers be taught a lesson?

M’coblogger Ed thinks there is a case for teaching drivers to behave — specifically by appeals to patriotism. Education programmes are a popular idea amongst cyclists, cash-strapped councils, and road safety types. I dismissed them as a solution that doesn’t work in my own post on revenge and road danger, but didn’t go into any detail. So I thought I better ask: what’s the best evidence we have about driver education programmes?

Remember what I said about bicycle helmets. It may be common sense that teaching drivers will make roads safer and nicer places to be, but common sense is frequently wrong, and cures can kill if they’re based on common sense rather than evidence. Trying to educate drivers could make the roads safer and nicer. It could be entirely ineffective. Or it could make them more dangerous and less pleasant. Until we conduct a controlled trial, we don’t know which.

There are two systematic reviews from the Cochrane Collaboration looking at the effectiveness of driver education programmes.  Cochrane reviews are, remember, the independent synthesis of everything that we know about a particular intervention, and are considered by doctors to be the closest thing we can ever get to fact.

The first Cochrane Review looks at the effectiveness of driver education in existing drivers. The schemes that have been trialled particularly focus on advanced driver training — the sort of programme that is designed to improve hazard detection and reduce error making, and which is frequently recommended for professional drivers — and on the remedial programmes that are increasingly offered to drivers who break the rules as an alternative to a driving ban.  These are lessons and lectures rather than marketing campaigns, but the remedial programmes — lectures on why speed limits matter — are particularly relevant to the “be nice” approach to making our streets nicer places where people feel able to ride bicycles.

The review found 24 trials from 1962 to 2002, all in the US except for one in Sweden, with more than 300,000 participants between them.  With those sorts of numbers, there is little chance of the review accidentally getting a false result.  Four were for advanced driving courses, the rest for remedial classes.  The programmes ranged from the simple supply of written material (9 trials) — a letter and copy of the rule book — through group lectures (16 trials) to proper one-on-one classes (7 trials), but all were designed to improve “driver performance and safety”.

The trials typically checked up on participants two years later and compared the rate of rule breaking and/or the rate of crashes in those who received the education programme and the controls who did not.  There was no difference. The education programmes didn’t stop drivers breaking the law or having crashes.  The authors concluded that companies shouldn’t bother with driving courses for their staff, but should let them take the train instead.

The evidence reviewed isn’t perfect. They could not, for example, blind participants as to whether they were in the study or control group. And the conclusions apply to the 32 specific advanced driving courses and remedial classes that were trialled — we cannot say for sure that other types of education campaign wouldn’t work. But the evidence tells us to at least be very wary of investing in any campaign strategy that relies on teaching people to play nice.

The second Cochrane review looks at the effectiveness of educating school kids before they start driving. These are the sort of programmes that are supposed to address the fact that 17–21-year-old drivers are twice as likely to crash as the average driver. They are particularly popular with the Road Safety industry, and there are several varieties common in this country. Indeed, I have first-hand experience: it must have been during the final GCSE year, aged 15 or 16, that we were all taken to the Bovington tank training circuit to take it in turns driving hatchbacks (sadly no tanks) around the track, doing hill starts, three-point turns, reverse parking and, as a treat afterwards, emergency stops from 70mph. While not everybody is privileged enough to get real practical lessons, the government does at least make sure that kids are taught how to get a learner’s license, how to find an instructor and what tests they will need to take, and are given a few road safety messages.¹ *

The Cochrane review found three RCTs with a total of around 18,000 students. The review looked at the public health outcome of the trials, typically measured as the rate of crashes and/or violations in the first few years of holding a license. Giving school kids driving education did not reduce the incidence of crashes and violations.

Indeed, the authors, against common sense, found evidence of the opposite. The reason can be found in the other outcome that the trials measured: the time it took the kids from turning 17 (or whatever age was relevant in their particular locality) to passing their driving test (which the study gives the awful name “license delay”). Kids who were given driving classes at school were more likely to seek and obtain a license, and they did so earlier — and we already know that age correlates with crash rate and rule breaking (or at the very least, being caught and punished for rule breaking).  Driving classes in school weren’t making people drive safely, but they were making people drive.

You can see why driver education programmes are so popular with the road safety industry, puppet of the motoring lobby. The trials reviewed by Cochrane were all from the mid 1980s, yet we continue to put money and effort into programmes that are worse than useless. My own school driving lesson was fifteen years after school driving lessons were shown to be harmful to our health.

Whenever questioned, the government cites as justification its own non-controlled study which showed that kids are able to recall and are vaguely more likely to agree with specific road safety messages when asked three months after the lessons. No, really. That’s it.¹

So drivers can be taught. They can be taught, before they even become drivers, that driving is normal, just something that everybody does. The moment I turned 17 I wasted about a hundred quid on driving lessons before I stopped to ask myself why. Everybody was doing it, right? You do GCSEs at 16, driving at 17, ‘A’-levels at 18. That’s how it works.

Perhaps they can be taught to behave and we just haven’t worked out how yet. There are not, so far as I am aware, any trials on the effectiveness of making motorists try cycling on the roads. But I suspect even that would have a limited effect, and it might even backfire.

Because people generally don’t do what they’re told to do, they do whatever looks normal and natural and easy. You can call that selfish and lazy if you like, but I don’t think that will help you understand or overcome the behaviour. In the UK it is normal and natural and easy to learn to drive and then drive badly. And people refuse to be taught that the things which are normal and natural and easy, the things that everybody around them is doing, are wrong. Experience trumps the word of others.

In the Netherlands, incidentally, cycling is normal and natural and, thanks to the infrastructure, easy. In the UK it’s none of those things. Make it easy and you’re nine tenths of the way to making it normal and natural.


That’s not what I said, say scientists

According to SCIENTISTS, “pollution is not improved by c-charge.”  (“Improved”? These scientists are so sloppy with their language.)

Journalists all over the city are this week reporting that the congestion charge has not reduced air pollution problems in central London, and that’s a fact, proven by science.  (As far as I know, the CCharge was never about air pollution — the clue’s in the name. But it’s potentially an interesting thing to look at all the same.  I can invent in my head plausible hypotheses for why it would improve air quality, and why it wouldn’t, but both would be useless without evidence either way.)

Unfortunately, I’m having a little trouble finding out who these so-called scientists quoted as the source for the claim are.  I asked scientists on twitter, but they couldn’t remember making the statement.

What I can easily find is a set of documents (none of them making the claim) reviewing work that explores a potential link between the CCharge and air pollution. The documents are not new research published as peer-reviewed articles in a scientific journal. They are a “research report” — a King’s College academic’s review of what we know about the CCharge and air pollution — coupled with commentary and a press release. The documents are all commissioned and published by the “Health Effects Institute”,

a nonprofit corporation chartered in 1980 as an independent research organization to provide high-quality, impartial, and relevant science on the health effects of air pollution. Typically, HEI receives half of its core funds from the US Environmental Protection Agency and half from the worldwide motor vehicle industry.

And that’s fine.  If the content is good, it doesn’t matter who funded it or where it was published.  I’m merely establishing exactly who is saying what.  The exact people are:

  • Professor Frank Kelly, an environmental health researcher specialising in air pollution, who (as leader of an independent group of scientists) wrote the comprehensive research report reviewing the evidence.
  • HEI’s Health Review Committee, who wrote a short commentary on Kelly’s research report.
  • HEI’s press office, who wrote the press release, which is the only thing that most journalists read.

The main line of research reviewed by Kelly looked at roadside and background levels of nitrogen oxides (NOx), carbon monoxide (CO) and small particulates (PM10).  The data compared the change (if any) in these pollutants at locations within the CCharge zone from a few years before implementation to a few years after implementation.  It did the same for control locations in London but outside of the CCharge zone, to account for any unrelated trends in air pollution.
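To make the design concrete, here is a minimal sketch of that sort of before-and-after comparison with control sites (essentially a difference-in-differences). The numbers are invented purely for illustration and are not the report’s data.

```python
# Minimal sketch of the study design described above: change at in-zone sites
# versus change at control sites. All values are invented for illustration.
inside_before, inside_after = 52.0, 50.5     # hypothetical annual mean pollutant level, in-zone sites
control_before, control_after = 48.0, 47.5   # hypothetical annual mean, control sites outside the zone

change_inside = inside_after - inside_before      # change at the "treated" sites
change_control = control_after - control_before   # background trend at control sites
ccharge_effect = change_inside - change_control   # the difference attributed to the scheme

print(f"change inside zone:       {change_inside:+.1f}")
print(f"change at control sites:  {change_control:+.1f}")
print(f"estimated CCharge effect: {ccharge_effect:+.1f}  (negative = improvement)")
```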

Kelly’s report concluded that there was no evidence of a CCharge effect on roadside levels of NOx; a complicated effect on background levels of NOx (whereby one type was marginally reduced and another type increased, especially near the boundary of the zone); but a marginal reduction in carbon monoxide and a reduction in particulates becoming more pronounced the closer one gets to the CCharge zone.  So the overall conclusion is that there is a small amount of evidence to indicate that the CCharge has made a small reduction to air pollution (the exact opposite of the claim attributed to “scientists” in the headlines).  However, the data was extremely limited — in some cases to single data points — and Kelly’s report doesn’t put much weight on any of the conclusions.

Even with sufficient data, Kelly’s report indicates, there are limits to what this kind of study can say about the CCharge’s effects. The CCharge zone is very small, he points out, and our atmosphere somewhat fluid: the air in London blows around and mixes, so this study design is not an ideal way to answer questions about the CCharge.* **

All of these limitations in study design and data quantity are reflected in the Health Review Committee’s short commentary on the report:

Ultimately, the Review Committee concluded that the investigators, despite their considerable effort to study the impact of the London CCS, were unable to demonstrate a clear effect of the CCS either on individual air pollutant concentrations or on the oxidative potential of PM10. The investigators’ conclusion that the primary and exploratory analyses collectively indicate a weak effect of the CCS on air quality should be viewed cautiously. The results were not always consistent and the uncertainties surrounding them were not always clearly presented, making it difficult to reach definitive conclusions.

Which is to say: the research so far isn’t really capable of answering any questions satisfactorily. Such evidence as there is points to a small improvement in air quality thanks to the CCharge, but none of it is very good. They go on to make the academic’s favourite conclusion: more research is necessary.

That’s right, this is a 121 page research review with associated commentary which simply concludes that the existing data is insufficient to tell us anything useful at all.  That’s no criticism of Kelly or the HEI.  They set out to review the evidence; the evidence just happens to be severely limited.

The Health Effects Institute decided to press release this.  “Study finds little evidence of air quality improvements from London congestion charging scheme,” the press release screams in bold caps.  “Pollution not improved by C-Charge,” says Londonist. Can you spot the difference between the HEI press release and the Londonist headline?

There is an old saying that absence of evidence is not evidence of absence.*** Confusing the two is a classic source of bad science and bad journalism, and in this case it nicely sums up what is wrong with the Londonist piece. A review which actually found very weak evidence that the CCharge improved air quality is covered as a study which found hard proof of the exact opposite.

* Indeed, Boris Johnson would like to blame all of the city’s air pollution on clouds blowing in from the continent rather than on the motor vehicles that account for most of it.

** I could add to this limitation the fact that the CCharge was not merely meant to cut car use within the zone: it was meant to fund a massive increase in bus frequencies, subsidise fares, and generally make buses and trains more inviting throughout London.  The effect of the CCharge on road traffic throughout the capital is complex, so it’s questionable whether the “control” sites can be said to be unaffected by the intervention.

*** Before someone points it out, yes I know it’s a bit more complicated than that, but in this case the saying applies nicely.

Headline figures

If you haven’t done so already, start from this post and work your way forward.

For rare outcomes like bicyclist head injuries, an odds ratio can be used as an approximation of relative risk: that is, of how much a medical intervention changes the risk of a specific outcome. An odds ratio of 0.3 is interpreted as a 70% reduction in the risk of head injury when wearing a bicycle helmet.
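To see why the approximation works, here is a quick sketch of the arithmetic in Python. The counts are made up, chosen only so that the odds ratio comes out near the review’s 0.3; nothing here is Cochrane data.

```python
# Hypothetical counts, invented purely for illustration (not Cochrane data):
# among 1,000 helmeted riders, 3 head injuries; among 1,000 bare-headed riders, 10.
injured_helmet, uninjured_helmet = 3, 997
injured_bare, uninjured_bare = 10, 990

odds_ratio = (injured_helmet / uninjured_helmet) / (injured_bare / uninjured_bare)
relative_risk = (injured_helmet / (injured_helmet + uninjured_helmet)) / \
                (injured_bare / (injured_bare + uninjured_bare))

print(f"odds ratio    = {odds_ratio:.2f}")     # ~0.30
print(f"relative risk = {relative_risk:.2f}")  # ~0.30, i.e. roughly a 70% risk reduction
# The two agree here because head injury is rare in both groups; for a common
# outcome the odds ratio would overstate the change in risk.
```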

The Cochrane Review looked at five studies, which contained a number of sub-analyses.  There was actually a range of odds ratios found when looking at different types of injury in different groups of cyclists.  In one, an odds ratio of 0.15 was reported.

So now the headline figure is that bicycle helmets protect against a whopping 85% of injuries. Imagine the lives that could be saved. Won’t somebody think of the children? The 85% figure is constantly repeated by “Road Safety” spokesmen, and reported without context by journalists. It’s cited by the British Medical Association in support of banning people from riding bicycles except when wearing helmets. The 85% figure matters.

Leaving aside questions over whether the 85% figure represents the real efficacy of helmets, how useful is it as a guide for how to live our lives?  Well, as Ben Goldacre puts it: “you are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day.”  Nobody has ever died from a meteor falling on their head*.

What Ben is saying is that relative risk is only a useful number to communicate to the public if you also communicate the absolute risk.  If you want to know whether it’s worth acting to reduce the risk of something bad happening, you need to know how likely it is to happen in the first place.

In the UK, 104 cyclists died on the roads in 2009, according to DfT stats. It was 115 in 2008, but the trend has been downwards for a long time. For simplicity, let’s say that in future years we could expect on average 100 cyclist deaths per year. It’s really difficult to say how many cyclists there are in the UK, because you can define it in several different ways, and even then the data that we have is crap. You can estimate how many bicycles there are, but these estimates vary, many bicycles might be out of use, and many of us own more than one. You can take daily commuter modal share — which would give us 1 million cyclists — but there’s more to using a bicycle than commuting, and most people mix and match their modes. According to the latest National Travel Survey, 14% of people use a bicycle for transport at least once per week. An additional 4% cycle several times a month, and 4% again cycle at least once a month. Cumulatively, 32% of the British people cycle at least sometimes, but some of those are too infrequent to be worth counting. To be generous, and to keep the numbers simple, I’ll round it down to 16%, giving us 10 million on-road cyclists in the UK. That means one in 100,000 cyclists is killed in a cycling incident each year.
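For anyone who wants to check the zeroes, the back-of-envelope arithmetic is just a few lines. This is a sketch using the rounded figures above, nothing more precise than that.

```python
# Back-of-envelope sketch using the post's rounded figures (not precise statistics).
cyclist_deaths_per_year = 100   # rounded down from the DfT's 104 (2009)
uk_cyclists = 10_000_000        # ~16% of the population, generously rounded

annual_risk = cyclist_deaths_per_year / uk_cyclists
print(f"annual risk of dying while cycling: 1 in {1 / annual_risk:,.0f}")   # 1 in 100,000

years_of_cycling = 85           # assume an unlikely cradle-to-grave cyclist
lifetime_risk = annual_risk * years_of_cycling
print(f"lifetime risk: about 1 in {1 / lifetime_risk:,.0f}")                # about 1 in 1,176
# i.e. dying while cycling once in over a thousand cycling lifetimes
```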

To put it another way, there’s a good chance you’ll get killed if you carry on cycling right up to your 100,000th birthday. (If you do not first die in the inferno caused by the candles on the cake.) Or, if Adam and Eve had left Africa 200,000 years ago on bicycles, there is a good chance that at least one of them would be dead by now. Alternatively, if you accept that life expectancy is around 80–90 and make the unlikely assumption that all cyclists remain cyclists pretty much from cradle to grave, you might expect to die cycling once in over a thousand lifetimes. Nine hundred and ninety-nine lifetimes in a thousand, you will die of something much more probable. Like heart disease, or cancer.

But not everybody who dies on a bicycle dies of head injuries, and not everybody who dies of head injuries sustained while riding a bicycle would be helped by wearing a helmet. The DfT/Transport Research Laboratory have done their own extensive review of the medical literature on helmets and say: “A forensic case by case review of over 100 British police cyclist fatality reports highlighted that between 10 and 16% of the fatalities reviewed could have been prevented if they had worn an appropriate cycle helmet.” This is because, while some form of head injury was involved in over half of cyclist fatalities, the head injury was usually in combination with one or more serious injuries elsewhere on the body; and even in those where only the head sustained a serious injury, as often as not it was of a type or severity that a helmet could not prevent. There are, of course, many caveats and limitations to such an estimate, which relied on many assumptions, some amount of subjective judgement, and a limited dataset biased towards the sort of cyclist fatalities that the police are interested in. So we could be generous and round it up to 20% — that helps keep our numbers simple.

So we’re talking about 20 lives saved per year, or, in terms that matter to you, your life saved if you cycled for half a million years. Of course, a third of British cyclists already wear helmets, so we should add in the lives that helmets are presumably already saving. Being generous again, call it 40 lives per year.

That would give you a chance of less than 1 in 2,500 that, as a cradle-to-grave bicycle user, bicycling from nursery school to nursing home, you will die in a crash that a helmet would have protected against.  The chances are 2,499 in 2,500 that you will die of something else.  Like the 4 in 2,500 chances of being killed in a cycling incident where a helmet would not have helped.

Or the 6 in 2,500 chances of death by falling down stairs.  Or the 3 in 2,500 of being run over by a drunk driver.  Or the whopping 30 in 2,500 chances of dying of an air pollution related respiratory disease.**  Unfortunately I couldn’t find the British Medical Association’s policy on legal compulsion for users of stairs to wear appropriate personal protective equipment.
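Again, for anyone checking the working, the less-than-1-in-2,500 figure is the same sort of back-of-envelope sketch, using the generous roundings above.

```python
# Sketch of the "less than 1 in 2,500" figure, using the post's generous roundings.
helmet_preventable_deaths_per_year = 40   # 20 from the ~20% estimate, doubled to allow
                                          # for lives helmets are already saving
uk_cyclists = 10_000_000
years_of_cycling = 85                     # cradle-to-grave cyclist, as before

lifetime_chance = helmet_preventable_deaths_per_year / uk_cyclists * years_of_cycling
print(f"lifetime chance of a helmet-preventable cycling death: "
      f"about 1 in {1 / lifetime_chance:,.0f}")
# -> about 1 in 2,941, i.e. a chance of less than 1 in 2,500
```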

Of course, in addition to the 100 cyclists killed on British roads each year, another 1,000 suffer serious but non-fatal head injuries, sometimes involving permanent and life-changing brain damage (as do users of stairs). The Cochrane Review says that up to 850 of those injuries would be avoided or made less severe if a helmet were worn; the more pessimistic TRL review says that perhaps 200 of them might be prevented or mitigated by an appropriate helmet. Either way, we’re still talking about many thousands of years of cycling per injury prevented.
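The same sketch applies to those injury numbers; again, this is my own rough arithmetic, not a figure from either review.

```python
# Years of cycling per serious head injury prevented, under each review's estimate.
uk_cyclists = 10_000_000   # as before

for prevented_per_year, source in [(850, "Cochrane (optimistic)"),
                                   (200, "TRL (pessimistic)")]:
    years_per_prevented_injury = uk_cyclists / prevented_per_year
    print(f"{source}: one serious head injury prevented per "
          f"{years_per_prevented_injury:,.0f} cyclist-years")
# -> roughly 11,765 (Cochrane) or 50,000 (TRL) years of cycling per injury prevented
```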

Whether you think those numbers make helmets worthwhile is up to you — I don’t think these numbers alone objectively prove that helmets are or are not worth using. Just don’t be fooled by the stark headline-grabbing figure of an 85% risk reduction. When the absolute risk to begin with is smaller than that of fatally falling down the stairs, and a fraction of one percent of that of cancer and heart disease, consider whether that relative risk reduction really matters.

Of course, that might all change once we’ve looked at the next part of the story…

* I have not checked this fact, which I just made up, but I would be surprised to hear that it is not true.

** Hastily googled and calculated headline figures for illustrative purposes only; again, I have not thoroughly assessed these.

Final disclaimer: this is a hastily scribbled blog post, not an academic paper.  I’ve checked my zeroes and decimal places, but if I’ve overlooked something or accidentally written something to the wrong order of magnitude, please do point it out.

Killer cures

What kind of moron does not wear a helmet whilst riding a bike? Anyone that stupid deserves to have their brains scrapped off the road. —Dave, bloke commenting on the failed Melbourne bike share.

Cycle in London without a helmet?  You’d need your head examined… —Ross Lydall, Evening Standard transport correspondent.

The BMA, as a part of its policy to improve safe cycling supports compulsory wearing of cycle helmets when cycling for children and adults. —The British Medical Association

I know a lot of you find the whole helmets thing — whether they “help” or “work” or not — tiresome and unimportant.  Well tough.  Bicycle helmets are a medical intervention — a special kind of medical intervention — and whether or not medical interventions work and are worthwhile is always a fascinating subject.  More importantly, a large proportion of the general public and of journalists assume that helmets work, and the British Medical Association campaigns for compulsory bicycle helmet laws.  What the BMA does matters.  If the BMA endorses a medical intervention, we can’t dismiss arguments about it as tiresome and unimportant.

Archie Cochrane, the influential champion of modern evidence-based medicine and one of history’s most underrated heroes, is said to have played a mischievous prank on colleagues. In an age when doctor knew best, Cochrane managed to organise a randomised trial of two care regimens for recovering heart attack patients: extensive hospital care (which every doctor knew was what a heart attack patient needed) versus home care. A few months into the trial he convened his colleagues in the monitoring group to break the bad news that eight home care patients had died versus four hospital care patients. His colleagues’ fears had been proven correct: hospital treatment was clearly far superior to home treatment, and the trial must be stopped immediately, as it would simply be unethical to continue to subject patients to dangerous home care. At which point Cochrane took another look at his notes and declared that, to his great embarrassment of course, he had misread his shorthand: eight hospital patients had died against only four home care patients. After the awkward silence, the monitoring group all agreed that it was far too early to draw any conclusions from such small numbers and at such an early stage — it could be pure chance that more patients had died in hospital care. The trial went on, and never did provide any evidence that hospital care is any better than home care.

It seems obvious that bicycle helmets are a good thing.  They save lives.  They prevent life-changing head injuries.  If your head is fast approaching concrete, you want something to intervene.  It’s common sense, right?  You’d be mad not to wear one.

But Cochrane and his fellow mid-20th century proponents of evidence-based medicine showed that facts do not always match common sense. The obvious answer is not always the correct one. The obvious common sense fact that hospital care is better than home care for recovering heart attack patients turned out not to be correct. As a new generation of doctors recognised the importance of evidence-based medicine, randomised controlled trials were belatedly carried out on nearly everything that doctors do. And, oops, they discovered that a lot of practices that doctors had considered to be simple obvious common sense had actually been harming their patients, ruining lives and sometimes killing people.

For a long time I took Pascal’s Wager on bike helmets: I had been given various reasons to believe that any benefit from wearing one was probably marginal, but there seemed to be no good reason not to wear one. But the lesson from Cochrane — that common sense can kill you — is that there could be a very good reason for not wearing one. What if wearing a bicycle helmet actively increases your risk of injury and death while riding a bicycle? We can’t just assume that it doesn’t.

How could bicycle helmets possibly be bad for you?  Concrete meets head: intervention surely a good thing?  As that great 21st century populariser of evidence-based medicine would say: I think you’ll find it’s a bit more complicated than that.  In helmets, as in most transport issues, we seem to be obsessed with the engineering and overlook the way that people behave.  Helmet efficacy is as much a question of psychology as it is physics.

Because the interesting aspect of helmet research is not so much how they affect your chances of surviving an accident, but how they affect your chances of having an accident. It all comes back to how road users behave, and there are reasons to believe that helmet use could change people’s behaviour in a way that increases the accident rate. Many readers will already be familiar with the two most established lines of research: risk compensation and the safety-in-numbers effect. I’ll look at those in more detail another time, but briefly: risk compensation proposes that we adjust our behaviour according to perceived risks — in this case, the cyclist wearing the helmet perceives himself to be at reduced risk and happily cycles with less care; more importantly, the car, bus and truck drivers around him drive with less care. The safety-in-numbers effect proposes that cyclists are safer when there are more cyclists on the road — both in that specific time and place, as other vehicles will have to slow and use caution around them; and in general, as other road users will be expecting to see cyclists and are more likely to know how to behave around them. If the perception is that cycling is a dangerous extreme sport that requires a helmet, and if that perception puts people off cycling, then the safety-in-numbers effect is diminished.

It’s easy to dismiss these things without considering them: helmets are hard but simple; behaviour is soft but complicated.  It’s easier to go with common sense.  But common sense is often what bad science is made of, and common sense can kill you.

That doesn’t mean we can just assume that helmets are ineffective or bad.  With a medical intervention, you start from scratch, collect the data, and follow the evidence wherever it takes you.  This introductory post and its title are not supposed to bias our exploration of the evidence one way or another, only to get us beyond the unexamined assumption that helmets work.

So what’s the best evidence on bicycle helmets?  Named in honour of the pioneer Archie Cochrane, the Cochrane Collaboration systematically reviews the evidence for medical interventions.  A Cochrane Review looks carefully at all of the research that has been conducted on an intervention, considers the factors affecting the quality of each piece of research, and synthesises the results of all of the research to a conclusion which will generally be considered by medical practitioners to be the best knowledge we currently have on that intervention.  In a field that must always remain skeptical of the status quo and open to new evidence, a Cochrane Review is in practice considered to be the closest approximation we have to The Truth.  Good doctors don’t use their common sense, they use Cochrane Reviews.

The Cochrane Collaboration have reviewed the evidence for bicycle helmet efficacy.  This weekend, I’ve got half a dozen posts looking at that evidence, the way that it is presented by the Collaboration, and the evidence that the Collaboration has chosen to omit.
