Who are all these self-harming Dutch helmet wearers?

Martin Porter mentions a fun fact about helmet wearing:

Hans Voerknecht has been to a Velo-City conference in Vancouver to explain why mandatory helmet laws are not such a great idea.  One of his statistics is that in the Netherlands, where cycling is ubiquitous, 13.3 per cent of the cyclists admitted to hospitals with injuries wore helmets — even though just 0.5 per cent of Dutch cyclists wear helmets.

This statistic is both utterly useless and extremely important. It tells us nothing about whether helmets are effective, ineffective or dangerous, but it does brilliantly illustrate the fact that the helmets issue is far from being a simple “no brainer”, and hints at one of the major flaws in the scientific studies of helmet efficacy.

Martin speculates on the reason for the interesting 30 times higher rate of hospitalisation amongst helmet wearers:

Maybe tourists from Anglo Saxon nations wearing helmets are disproportionately represented in the hospital statistics.  Maybe also those with helmets are perceived by motorists or perceive themselves to be less vulnerable.

In fact, it’s obvious who the helmet wearers are in the Netherlands.

Here’s a cyclist wearing a helmet:


while this bicycle user is helmet free:


These cyclists, ready for Saturday morning training, are wearing helmets, but the woman who has just passed them isn’t:


This cyclist is wearing a helmet:


This family out for a ride isn’t:


This cyclist is wearing a helmet:


This chap just has a cap:


This guy is wearing a helmet:


This one isn’t:


These cyclists are wearing helmets:


These folks aren’t:


These cyclists are wearing helmets:


And these aren’t:


Can you spot the difference? All of the helmeted cyclists are racing around, head down, feet firmly clamped to the pedals on fragile, lightweight, skinny-tyred bicycles — except for the one on a muddy, knobbly-tyred mountainbike. Most of the helmet photos were taken at the weekend. Some of the others were too: a couple of gents leisurely touring the sand dunes in a nature reserve, and a family crossing Nesciobrug, perhaps off for a picnic in the country. But mostly they’re just people making everyday journeys: commuters in Amsterdam, shoppers in Utrecht, school kids in Houten. They’re on sturdy, steady bicycles, rarely doing more than 15mph. Their environment is not completely without hazards, but even if things do go wrong, they’re extremely unlikely to find themselves hospitalised. The racers and mountainbikers, meanwhile, are far more likely to fall off or hit something, and at the sort of speeds where that breaks things.

The Dutch wear helmets — and get injured — when they’re doing sports. The Dutch don’t wear helmets when they’re using transport.

This is one of the major flaws in much of our research on helmets, and in much of the British approach to cycling. It fails to account for the differences between using a bicycle and participating in (extreme) sports.

Edited to add, in case it wasn’t clear — for I fear that too frequently in these posts I take all of the background as read, having been over it many times before — in the Netherlands these racers wearing helmets are the same people riding utility bikes without them. The folk who get dressed up in lycra and helmets to ride sports bikes at the weekend will, during the week, be riding a utility bike in normal clothes and no helmet, because that’s what the Dutch do. All of them. I mean, they don’t all do the racing, but they all have a utility bike. We don’t expect folk who enjoy a bit of rock climbing at the weekend to continue wearing their helmet all week, or people whose hobby is diving to keep the scuba tank on for the Monday morning commute.

Can drivers be taught a lesson?

M’coblogger Ed thinks there is a case for teaching drivers to behave — specifically by appeals to patriotism. Education programmes are a popular idea amongst cyclists, cash-strapped councils, and road safety types. I dismissed them as a solution that doesn’t work in my own post on revenge and road danger, but didn’t go into any detail. So I thought I’d better ask: what’s the best evidence we have about driver education programmes?

Remember what I said about bicycle helmets. It may be common sense that teaching drivers will make roads safer and nicer places to be, but common sense is frequently wrong, and cures can kill if they’re based on common sense rather than evidence. Trying to educate drivers could make the roads safer and nicer. It could be entirely ineffective. Or it could make them more dangerous and less pleasant. Until we conduct a controlled trial, we don’t know which.

There are two systematic reviews from the Cochrane Collaboration looking at the effectiveness of driver education programmes.  Cochrane reviews are, remember, the independent synthesis of everything that we know about a particular intervention, and are considered by doctors to be the closest thing we can ever get to fact.

The first Cochrane Review looks at the effectiveness of driver education in existing drivers. The schemes that have been trialled particularly focus on advanced driver training — the sort of programme that is designed to improve hazard detection and reduce error making, and which is frequently recommended for professional drivers — and on the remedial programmes that are increasingly offered to drivers who break the rules as an alternative to a driving ban.  These are lessons and lectures rather than marketing campaigns, but the remedial programmes — lectures on why speed limits matter — are particularly relevant to the “be nice” approach to making our streets nicer places where people feel able to ride bicycles.

The review found 24 trials from 1962 to 2002, all in the US except for one in Sweden, with more than 300,000 participants between them.  With those sorts of numbers, there is little chance of the review accidentally getting a false result.  Four were for advanced driving courses, the rest for remedial classes.  The programmes ranged from the simple supply of written material (9 trials) — a letter and copy of the rule book — through group lectures (16 trials) to proper one-on-one classes (7 trials), but all were designed to improve “driver performance and safety”.

The trials typically checked up on participants two years later and compared the rate of rule breaking and/or the rate of crashes in those who received the education programme and the controls who did not.  There was no difference. The education programmes didn’t stop drivers breaking the law or having crashes.  The authors concluded that companies shouldn’t bother with driving courses for their staff, but should let them take the train instead.

The evidence reviewed isn’t perfect. They could not, for example, blind participants as to whether they were in the study or control group. And the conclusions apply to the 32 specific advanced driving courses and remedial classes that were trialled — we cannot say for sure that other types of education campaign wouldn’t work. But the evidence tells us to at least be very wary of investing in any campaign strategy that relies on teaching people to play nice.

The second Cochrane review looks at the effectiveness of educating school kids before they start driving.  These are the sort of programmes that are supposed to address the fact that 17-21 year old drivers are twice as likely to crash as the average driver. They are particularly popular with the Road Safety industry and there are several varieties common in this country.  Indeed, I have first hand experience: it must have been during the final GCSE year, aged 15 or 16, that we were all taken to the Bovington tank training circuit to take it in turns driving hatchbacks (sadly no tanks) around the track, doing hill starts, three point turns, reverse parking, and, as a treat afterwards, emergency stops from 70mph. While not everybody is privileged enough to get real practical lessons, the government does at least make sure that kids are taught how to get a learner’s license and find an instructor, what tests they will need to take, and are given a few road safety messages.¹ *

The Cochrane review found three RCTs with a total of around 18,000 students. The review looked at the public health outcome of the trials, typically measured as the rate of crashes and/or violations in the first few years of holding a license. Giving school kids driving education did not reduce the incidence of crashes and violations.

Indeed, the authors, against common sense, found evidence of the opposite. The reason can be found in the other outcome that the trials measured: the time it took the kids from turning 17 (or whatever age was relevant in their particular locality) to passing their driving test (which the study gives the awful name “license delay”). Kids who were given driving classes at school were more likely to seek and obtain a license, and they did so earlier — and we already know that age correlates with crash rate and rule breaking (or at the very least, being caught and punished for rule breaking).  Driving classes in school weren’t making people drive safely, but they were making people drive.

You can see why driver education programmes are so popular with the road safety industry, puppet of the motoring lobby. The trials reviewed by Cochrane were all from the mid 1980s, yet we continue to put money and effort into programmes that are worse than useless. My own school driving lesson was fifteen years after school driving lessons were shown to be harmful to our health.

Whenever questioned, the government cites as justification its own non-controlled study which showed that kids are able to recall and are vaguely more likely to agree with specific road safety messages when asked three months after the lessons. No, really. That’s it.¹

So drivers can be taught. They can be taught, before they even become drivers, that driving is normal, just something that everybody does. The moment I turned 17 I wasted about a hundred quid on driving lessons before I stopped to ask myself why. Everybody was doing it, right? You do GCSEs at 16, driving at 17, ‘A’-levels at 18. That’s how it works.

Perhaps they can be taught to behave and we just haven’t worked out how yet. There are not, so far as I am aware, any trials on the effectiveness of making motorists try cycling on the roads. But I suspect even that would have limited effect, and maybe even that could backfire too.

Because people generally don’t do what they’re told to do, they do whatever looks normal and natural and easy. You can call that selfish and lazy if you like, but I don’t think that will help you understand or overcome the behaviour. In the UK it is normal and natural and easy to learn to drive and then drive badly. And people refuse to be taught that the things which are normal and natural and easy, the things that everybody around them is doing, are wrong. Experience trumps the word of others.

In the Netherlands, incidentally, cycling is normal and natural and, thanks to the infrastructure, easy. In the UK it’s none of those things. Make it easy and you’re nine tenths of the way to making it normal and natural.


That’s not what I said, say scientists

According to SCIENTISTS, “pollution is not improved by c-charge.”  (“Improved”? These scientists are so sloppy with their language.)

Journalists all over the city are this week reporting that the congestion charge has not reduced air pollution problems in central London, and that’s a fact, proven by science.  (As far as I know, the CCharge was never about air pollution — the clue’s in the name. But it’s potentially an interesting thing to look at all the same.  I can invent in my head plausible hypotheses for why it would improve air quality, and why it wouldn’t, but both would be useless without evidence either way.)

Unfortunately, I’m having a little trouble finding out who these so-called scientists quoted as the source for the claim are.  I asked scientists on twitter, but they couldn’t remember making the statement.

What I can easily find is a set of documents (none of them making the claim) reviewing work that explores a potential link between the CCharge and air pollution.  The documents are not new research published as peer reviewed articles in a scientific journal.  They are a “research report” — a King’s College academic’s review of what we know about the CCharge and air pollution — coupled with commentary and a press release.  The documents are all commissioned and published by the “Health Effects Institute”,

a nonprofit corporation chartered in 1980 as an independent research organization to provide high-quality, impartial, and relevant science on the health effects of air pollution. Typically, HEI receives half of its core funds from the US Environmental Protection Agency and half from the worldwide motor vehicle industry.

And that’s fine.  If the content is good, it doesn’t matter who funded it or where it was published.  I’m merely establishing exactly who is saying what.  The exact people are:

  • Professor Frank Kelly, an environmental health researcher specialising in air pollution, who (as leader of an independent group of scientists) wrote the comprehensive research report reviewing the evidence.
  • HEI’s Health Review Committee, who wrote a short commentary on Kelly’s research report.
  • HEI’s press office, who wrote the press release, which is the only thing that most journalists read.

The main line of research reviewed by Kelly looked at roadside and background levels of nitrogen oxides (NOx), carbon monoxide (CO) and small particulates (PM10).  The data compared the change (if any) in these pollutants at locations within the CCharge zone from a few years before implementation to a few years after implementation.  It did the same for control locations in London but outside of the CCharge zone, to account for any unrelated trends in air pollution.
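
This before-and-after design with control sites is essentially a difference-in-differences comparison. A minimal sketch of the logic — with made-up pollutant numbers purely for illustration, none of them from the report — looks like this:

```python
# Hypothetical mean pollutant levels (arbitrary units), a few years
# before and after the CCharge, inside the zone and at control sites
# elsewhere in London.  All numbers invented for illustration.
zone_before, zone_after = 60.0, 57.0
ctrl_before, ctrl_after = 58.0, 57.5

zone_change = zone_after - zone_before  # change inside the zone
ctrl_change = ctrl_after - ctrl_before  # background trend at control sites

# The estimated CCharge effect is the zone's change over and above
# the background trend seen at the controls.
ccharge_effect = zone_change - ctrl_change

print(f"estimated CCharge effect: {ccharge_effect:+.1f} units")
```

Subtracting the control-site trend is what, in principle, strips out city-wide changes in pollution unrelated to the charge — though with so little data, and air blowing freely across the zone boundary, the comparison is much weaker in practice.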

Kelly’s report concluded that there was no evidence of a CCharge effect on roadside levels of NOx; a complicated effect on background levels of NOx (whereby one type was marginally reduced and another type increased, especially near the boundary of the zone); but a marginal reduction in carbon monoxide and a reduction in particulates becoming more pronounced the closer one gets to the CCharge zone.  So the overall conclusion is that there is a small amount of evidence to indicate that the CCharge has made a small reduction to air pollution (the exact opposite of the claim attributed to “scientists” in the headlines).  However, the data was extremely limited — in some cases to single data points — and Kelly’s report doesn’t put much weight on any of the conclusions.

Even where there is sufficient data, Kelly’s report indicates that there are limitations to what this kind of data can say about the CCharge effects.  The CCharge zone is very small, he points out, and our atmosphere somewhat fluid: the air in London blows around and mixes, so even with sufficient data, this study design is not an optimal way to answer questions about the CCharge.* **

All of these limitations in study design and data quantity are reflected in the Health Review Committee’s short commentary on the report:

Ultimately, the Review Committee concluded that the investigators, despite their considerable effort to study the impact of the London CCS, were unable to demonstrate a clear effect of the CCS either on individual air pollutant concentrations or on the oxidative potential of PM10. The investigators’ conclusion that the primary and exploratory analyses collectively indicate a weak effect of the CCS on air quality should be viewed cautiously. The results were not always consistent and the uncertainties surrounding them were not always clearly presented, making it difficult to reach definitive conclusions.

Which is to say: the research so far isn’t really capable of answering any questions satisfactorily.  While the evidence is for a small improvement in air quality thanks to the CCharge, none of the evidence is very good.  They go on to make the academic’s favourite conclusion: more research is necessary.

That’s right, this is a 121-page research review with associated commentary which simply concludes that the existing data is insufficient to tell us anything useful at all.  That’s no criticism of Kelly or the HEI.  They set out to review the evidence; the evidence just happens to be severely limited.

The Health Effects Institute decided to press release this.  “Study finds little evidence of air quality improvements from London congestion charging scheme,” the press release screams in bold caps.  “Pollution not improved by C-Charge,” says Londonist. Can you spot the difference between the HEI press release and the Londonist headline?

There is an old saying that absence of evidence is not evidence of absence.***  It’s a classic source of bad science and bad journalism, and in this case it nicely sums up what is wrong with the Londonist piece.  A review which actually found very weak evidence that the CCharge improved air quality is covered as a study which found hard proof of the exact opposite.

* Indeed, Boris Johnson would like to blame all of the city’s problems on clouds blowing in from the continent rather than the motor vehicles that account for most of it.

** I could add to this limitation the fact that the CCharge was not merely meant to cut car use within the zone: it was meant to fund a massive increase in bus frequencies, subsidise fares, and generally make buses and trains more inviting throughout London.  The effect of the CCharge on road traffic throughout the capital is complex, so it’s questionable whether the “control” sites can be said to be unaffected by the intervention.

*** Before someone points it out, yes I know it’s a bit more complicated than that, but in this case the saying applies nicely.

Would a helmet help if hit by a car?

This post is part of a series: it starts with the intro to the helmets issue, then the summary of the best evidence on helmets, then a quick diversion into how dangerous cycling is and an attempt to define terms. And there’s more…

Brake, the “Road Safety” charity, say yes:

Helmets are effective for cyclists of all ages, in crashes which do and do not involve another vehicle.

That matters, because if cycling safety is in the news, journalists will go to Brake for an easy quote.

The British Medical Association also say yes:

Helmets provide equal level of protection from cars (69%) compared to other causes (65%)

This is important, because the BMA is a highly trusted organisation with political influence, and their current policy is to endorse the criminalisation of riding a bicycle when not wearing a helmet.

Interestingly, president of the Automobile Association Edmund King, who was giving away free advertising bicycle helmets in London this week, disagrees with the nation’s medics on both issues:

We don’t think helmets should be compulsory but we think there are benefits… Our view is that helmets do not protect against cars but they may protect against some of the 2.2m potholes which often are the cause of smashes into the ground by cyclists.

Carlton Reid adds a little detail:

Most bicycle helmets are designed for falls to the ground from one metre at speeds of 12mph. They offer almost zero protection in collisions between bicycles and fast-moving cars.

The risk reduction provided by helmets in bicycle crashes that do and do not involve motor vehicles is one of the few sub-group analyses that was performed in the case-control studies that are covered by the Cochrane Review, and it’s no surprise that this is the source for the BMA’s claim. In bicycle hospitalisations that did not involve cars it reported nearly 70% fewer head injuries in the helmet wearers. In bicycle hospitalisations that did involve motor vehicles there were nearly 70% fewer head injuries in helmet wearers.  A helmet is equally effective at preventing head and brain injury in crashes with cars as in solo crashes.

What makes Edmund King and Carlton Reid think they know better than the nation’s medics and road safety campaigners?  Indeed, what makes them think that they can go around claiming the opposite of the cold hard corroborated stats of the Cochrane review?

Well actually, they’re not.  Not quite.  King and Reid are judging helmet efficacy by a slightly different metric to the Cochrane Review.  The Cochrane Review looks at the set of bicyclists who have had an accident of a severity that hospitalises but does not kill outright.  The review says nothing about deaths, for example, and as the Cochrane Review itself notes, more than 90% of cyclist deaths are caused by “collisions” involving moving motor vehicles (the same proportion is found again by a separate route in the TRL review and again in NYC).  But only 25% of hospitalisations were caused by motor vehicles.  And while Cochrane suggested a whopping 85% of head injury hospitalisations (which in turn account for around half of all cyclist hospitalisations) could be avoided by wearing a helmet, the TRL review of post-mortem reports found that only 10-16% of all cyclist deaths might have been avoided.  Hospitalisations, of the sort reviewed by Cochrane, are not representative of deaths.  Fall off your bicycle and you might get hurt.  Get hit by a car and you might die.

That’s because when you fall off your bicycle, chances are you are toppling over some way — precisely the sort of simple fall that a helmet is designed for, and the sort of fall that is least likely to cause life-threatening injury to any other part of the body.  When hit by a car the body might be crushed, or thrown up and around at speeds that helmets are not designed for, and so there are many more opportunities to suffer fatal trauma to other parts of the body.

(As an aside, Brake actually get this one the wrong way ’round:

Nearly 50% of cyclist admissions to hospital are for head and facial injuries, and the majority of cyclist deaths and injuries are a result of head injury.

TRL has the answer to this one: around three quarters of cyclist fatalities did indeed involve a serious head injury.  But only about a quarter involved only a serious head injury.  The rest also involved one or more additional life-threatening injuries.  The Brake claim is at best misleading.)

This doesn’t mean that the BMA and Brake are all wrong* and King and Reid are completely correct.  A car at speed may be able to cause the sort of multiple trauma that merely falling over doesn’t, but that doesn’t mean that cars aren’t also capable of causing the sort of crashes that helmets are designed for, especially in low speed city traffic.

So Edmund King is wrong**.  But within the untruth he is communicating an important truth: cars are responsible for the most serious injuries and death, and helmets will rarely help in those cases.

Brake and the BMA are correct.  But their strictly truthful statements hide the crucial details, without which they are liable to seriously mislead.

* Indeed, they can’t be wrong.  You can provide a hypothesis for why helmets might be useless in crashes with cars, but no hypothesis can trump the real world stats that say helmets are useful in crashes with cars.

** Carlton Reid is not wrong, because he specified fast-moving cars.

What is a bicyclist?

This post is part of a series: it starts with the intro to the helmets issue, then the summary of the best evidence on helmets, then a quick diversion into how dangerous cycling is. And it won’t end here…

A good review of a medical intervention starts by explaining the population being studied.  The Cochrane review of helmets for preventing head injuries in bicyclists explains that its population is the set of bicyclists who sustained an injury that was sufficiently major to make them go to the ER for treatment (and not sufficient to kill them before they could seek treatment).

The review does not explain what they mean by a bicyclist.  (And since the original papers under review are closed-access, behind an extortionate paywall, we can’t know whether those do.)  Presumably they mean people riding a bicycle at the time that they sustained their injury.

Is that people riding their bicycle leisurely along a rail trail or towpath?

Is that people touring, head down into the wind in the deserted mountains?

Is that people racing in a peloton down the dual carriageway?

Is that kids in the BMX park?

Is that mountainbikers on the downhill courses?

Is it businessmen on their Bromptons riding through the stop-start city traffic?  Old ladies bouncing down cobbled streets on their step-through upright bikes?  Village vicars doing their rounds?

Mountainbikers, city commuters, and rail trail riders are very different people exposed to completely different environments and risks — and who have very different helmet-wearing and hospitalisation rates.  Lumping them all together is like lumping mountain hikers, sprinters, traceurs, marathon runners, city pedestrians and country footpath strollers together under the heading “walkers”.  But lump them together is exactly what the studies in the Cochrane review do, comparing the rate of head injury (as a proportion of all injuries) in helmet wearers and non-helmet wearers, and applying the results to make the recommendation that everybody should be made to wear a helmet while riding a bicycle, whatever their style and situation.  You may as well recommend Formula 1 safety gear for the drive to the supermarket.

Perhaps helmets help prevent head injuries in all people who use bicycles.  Perhaps they help mountainbikers more than tourists.  Perhaps it’s the other way around.  We don’t know.  We could know.  The researchers could have made sure to collect the data (perhaps the data is even already there, in the medical records) and then done sub-group analyses to give individual results for separate groups of bicyclists.  But they didn’t.  Why not?  Did it just not occur to them that “bicycling” might not be a single pursuit?  Or did they just assume that it didn’t matter, or that nobody would notice?  Either way, it amounts to a pretty serious limitation when you’re asking “should we legislate to ban all kinds of bicycle use except where the bicycle user is wearing a helmet?”

Headline figures

If you haven’t done so already, start from this post and work your way forward.

For rare events like bicyclist injuries, odds ratios can be used as an approximation of relative risk: that is, how much a medical intervention changes the risk of a specific outcome.  An odds ratio of 0.3 is interpreted as a 70% reduction in risk of head injury when wearing a bicycle helmet.
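
For the curious, here is how that reading works, using an invented 2×2 table — the counts below are made up for illustration and are not from the Cochrane studies:

```python
# Invented counts of injured cyclists attending the ER, split by
# helmet wearing and whether the injury was to the head.
helmet_head, helmet_other = 30, 170       # helmet wearers
nohelmet_head, nohelmet_other = 100, 170  # non-wearers

odds_helmet = helmet_head / helmet_other        # odds of head injury, helmeted
odds_nohelmet = nohelmet_head / nohelmet_other  # odds, unhelmeted
odds_ratio = odds_helmet / odds_nohelmet

# For rare outcomes the odds ratio approximates the relative risk,
# so an OR of 0.3 reads as roughly a 70% risk reduction.
risk_reduction = 1 - odds_ratio
print(f"OR = {odds_ratio:.2f}, ~{risk_reduction:.0%} reduction")
```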

The Cochrane Review looked at five studies, which contained a number of sub-analyses.  There was actually a range of odds ratios found when looking at different types of injury in different groups of cyclists.  In one, an odds ratio of 0.15 was reported.

So now the headline figure is that bicycle helmets protect against a whopping 85% of injuries.  Imagine the lives that could be saved.  Won’t somebody think of the children?  The 85% figure is constantly repeated by “Road Safety” spokesmen, and reported without context by journalists.  It’s cited by the British Medical Association in support for banning people from riding bicycles except when wearing helmets.  The 85% figure matters.

Leaving aside questions over whether the 85% figure represents the real efficacy of helmets, how useful is it as a guide for how to live our lives?  Well, as Ben Goldacre puts it: “you are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day.”  Nobody has ever died from a meteor falling on their head*.

What Ben is saying is that relative risk is only a useful number to communicate to the public if you also communicate the absolute risk.  If you want to know whether it’s worth acting to reduce the risk of something bad happening, you need to know how likely it is to happen in the first place.

In the UK, 104 cyclists died on the roads in 2009, according to DfT stats.  It was 115 in 2008, but the trend has been downwards for a long time.  For simplicity, let’s say that in future years we could expect on average 100 cyclist deaths per year.  It’s really difficult to say how many cyclists there are in the UK, because you can define it in several different ways, and even then the data that we have is crap.  You can estimate how many bicycles there are, but these estimates vary, many bicycles might be out of use, and many of us own more than one.  You can take daily commuter modal share — which would give us 1 million cyclists — but there’s more to using a bicycle than commuting, and most people mix and match their modes.  According to the latest National Travel Survey, 14% of people use a bicycle for transport at least once per week.  An additional 4% cycle several times a month, and 4% again cycle at least once a month.  Cumulatively, 22% of the British people cycle at least sometimes, but some of those are too infrequent to be worth counting.  To be generous, and to keep the numbers simple, I’ll round it down to 16%, giving us 10 million on-road cyclists in the UK.  That means one in 100,000 cyclists is killed in a cycling incident each year.
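
The back-of-envelope arithmetic, for anyone who wants to check my zeroes (the inputs are the rounded figures above):

```python
deaths_per_year = 100    # rounded-down DfT figure
cyclists = 10_000_000    # a generous 16% of the population

annual_risk = deaths_per_year / cyclists
print(f"1 in {cyclists // deaths_per_year:,} cyclists killed each year")
```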

To put it another way, there’s a good chance you’ll get killed if you carry on cycling right up to your 100,000th birthday.  (If you do not first die in the inferno caused by the candles on the cake.)  Or, if, when Adam and Eve first left Africa 200,000 years ago, they had done so on bicycles, there is a good chance that at least one of them would be dead by now.  Alternatively, if you accept that life expectancy is around 80-90, and make the unlikely assumption that all cyclists remain cyclists pretty much from cradle to grave, you might die cycling once in over a thousand lifetimes.  Nine hundred and ninety-nine lifetimes in a thousand, you will die of something much more probable.  Like heart disease, or cancer.

But not everybody who dies on a bicycle dies of head injuries, and not everybody who dies of head injuries sustained while riding a bicycle would be helped by wearing a helmet.  The DfT/Transport Research Laboratory have done their own extensive review of the medical literature on helmets and say: “A forensic case by case review of over 100 British police cyclist fatality reports highlighted that between 10 and 16% of the fatalities reviewed could have been prevented if they had worn an appropriate cycle helmet.”  This is because, while some form of head injury was involved in over half of cyclist fatalities, the head injury was usually in combination with one or more serious injuries elsewhere on the body; and even in those where only the head sustained a serious injury, as often as not it was of a type or severity that a helmet could not prevent.  There are, of course, many caveats and limitations to such an estimation, which relied on many assumptions, some amount of subjective judgement, and a limited dataset which was biased towards the sort of cyclist fatalities that the police are interested in.  So we could be generous and round it up to 20% — that helps keep our numbers simple.

So we’re talking about 20 lives saved per year, or in terms that matter to you, your life saved if you cycled for half a million years. Of course, a third of British cyclists already wear helmets, so we can add the number of cyclists whose lives are already being saved.  We could be generous again and say 40 lives per year.
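
Put as a single calculation, using the generous figures above and a cradle-to-grave cycling span of 85 years (my assumption, splitting the 80-90 life-expectancy range used earlier):

```python
preventable_per_year = 40  # generous estimate of helmet-preventable deaths
cyclists = 10_000_000
cycling_years = 85         # cradle-to-grave assumption

# Probability that a lifelong cyclist dies in a crash a helmet
# would have protected against.
lifetime_risk = preventable_per_year * cycling_years / cyclists
print(f"roughly 1 in {round(1 / lifetime_risk):,}")
```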

That would give you a chance of less than 1 in 2,500 that, as a cradle-to-grave bicycle user, bicycling from nursery school to nursing home, you will die in a crash that a helmet would have protected against.  The chances are 2,499 in 2,500 that you will die of something else.  Like the 4 in 2,500 chances of being killed in a cycling incident where a helmet would not have helped.

Or the 6 in 2,500 chances of death by falling down stairs.  Or the 3 in 2,500 of being run over by a drunk driver.  Or the whopping 30 in 2,500 chances of dying of an air pollution related respiratory disease.**  Unfortunately I couldn’t find the British Medical Association’s policy on legal compulsion for users of stairs to wear appropriate personal protective equipment.

Of course, in addition to the 100 cyclists killed on British roads each year, another 1,000 suffer serious but non-fatal head injury, sometimes involving permanent and life-changing brain damage (as do users of stairs).  The Cochrane Review says that up to 850 of those injuries would be avoided or less severe if a helmet were worn; the more pessimistic TRL review says that perhaps 200 of them might be prevented or mitigated by an appropriate helmet.  Either way, we’re still in the area of many thousands of years spent cycling.
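The back-of-envelope arithmetic of the last few paragraphs can be sketched in a few lines of Python.  Every input is one of this post’s rough, illustrative assumptions, not measured data — in particular the ten million regular cyclists is just a round figure consistent with the numbers above:

```python
# Back-of-envelope sketch of the risk arithmetic above.
# All inputs are this post's rough, illustrative assumptions.

cyclists = 10_000_000        # assumed pool of regular British cyclists
preventable_per_year = 20    # deaths a helmet might prevent (TRL estimate, rounded up)
with_existing_wearers = 40   # generously doubled to credit helmets already worn
lifetime_years = 85          # assumed cradle-to-grave cycling career

# Cyclist-years of riding per helmet-preventable death:
years_per_death = cyclists / preventable_per_year            # 500,000

# The same risk expressed in whole cycling lifetimes:
lifetimes_per_death = (cyclists / with_existing_wearers) / lifetime_years

print(f"one preventable death per {years_per_death:,.0f} cyclist-years")
print(f"i.e. fewer than one per {lifetimes_per_death:,.0f} cradle-to-grave cyclists")
```

With these assumptions the sketch reproduces the half-a-million-years figure and the less-than-1-in-2,500 odds in the text; change the inputs to see how sensitive the conclusion is.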

Whether you think those numbers make helmets worthwhile is up to you — I don’t think these numbers alone objectively prove that helmets are or are not worth using.  Just don’t be fooled by the stark headline-grabbing figure of 85% risk reduction.  When the absolute risk to begin with is smaller than that of fatally falling down the stairs, and a fraction of one per cent of that for cancer and heart disease, consider whether the risk reduction matters.

Of course, that might all change once we’ve looked at the next part of the story…

* I have not checked this fact, which I just made up, but I would be surprised to hear that it is not true.

** Hastily googled and calculated headline figures for illustrative purposes only; again, I have not thoroughly assessed these.

Final disclaimer: this is a hastily scribbled blog post, not an academic paper.  I’ve checked my zeroes and decimal places, but if I’ve overlooked something or accidentally written something to the wrong order of magnitude, please do point it out.

So what’s the best evidence we have on bicycle helmets?

According to the Cochrane Collaboration — the source that most doctors will go to for their summary of the evidence — it is five studies from the 1980s and 1990s.

The Cochrane Review set out to answer a very specific question: “in the set of people who sought Emergency Room treatment having had a bicycle crash, did wearing a bicycle helmet correlate with the rate of head and brain injuries among the patients?”  These are important details — the question was not “in the entire set of people who ride bicycles, does wearing a bicycle helmet affect mortality, life expectancy, the rate of serious injury, or injury recovery?”  It’s not a bad question that the researchers are asking, but it is a very limited question — the data is restricted to the type of injury that is serious enough to send people to hospital, but not serious enough to kill outright; it does not ask whether helmets correlate with any other types of injury beyond head and brain (more later); and it can say nothing about whether helmet wearers and non-helmet wearers differ in their behaviour or exposure to risk of the type of accident that sends them to hospital in the first place.  The latter possibility turns out to be a very interesting one, which will be explored later.

The Cochrane Review searched the existing medical literature for high quality studies that were theoretically capable of answering their question.  There are several different ways that one could design a study to answer a question like the Cochrane question — some methods more reliable than others.  The Cochrane Review found five studies described in seven papers, all with the same design: case-control studies.  This study design looks at a set of people who have been hospitalised with head injuries while riding a bicycle and examines their records to find out whether more or fewer of them were wearing a helmet than a similar set of cyclists who were hospitalised at roughly the same time and place but whose injuries were not head injuries.

Case-control studies have a number of limitations that make them less reliable than other study designs, like the gold-standard randomised controlled trial design.  Principally, the study must simply assume that the “case” and “control” populations are essentially the same, differing only in the intervention tested (helmets) and potentially in the outcome of interest (head injury as a proportion of all injuries).  The method accepts that there may be other differences between the populations of patients (are helmet wearers on average richer, more middle class, more likely to have health insurance, I wonder?), but makes the assumption that those differences are not important to the question being addressed, and so uses statistical methods to attempt to minimise their effect on the results.  For this reason, case-control studies are considered to be relatively weak evidence, and when more rigorous trials are conducted, they often find that case-control studies exaggerate the effects of interventions.  A good Cochrane review will carefully pick the studies that it includes, eliminating case-control studies unless they do everything possible to minimise the limitations of the design, and this review appears to have done that.

The populations in the five studies reviewed were 1040 cyclists hospitalised in Cambridge UK in 1992; 1710 cyclists in Melbourne in the late 1980s; 445 child cyclists in Brisbane in the early 1990s; 668 cyclists in Seattle in the late 1980s; and a further 3390 in Seattle in the early 1990s.  The results therefore apply to the shape, style and construction of the helmets that were on the market in the mid-1980s to early 1990s, and to the types of people who were choosing to wear helmets at that time.  (The Seattle study, completed in 1994, does look specifically at “hard shell”, “soft shell” and “no shell” helmets, finding the same result for all three).  Note that the Cochrane review was assessed as “up to date” in 2006, meaning that the authors do not believe that there is any good quality data newer than the early 1990s.  I’ll let you decide whether these studies are relevant to your own 2010-model helmet or not.

The outcome of the case-control study is the odds ratio — a measure of the strength of association between the intervention and the outcome, i.e., how big an effect the intervention appears to be having, and whether it appears to be helping or harming.  It’s literally the ratio of the odds of head injury in hospitalised helmet wearers to the odds of head injury in hospitalised non-helmet wearers.  So an OR of 1 would mean that the odds of head injury were equal, while an OR higher than 1 would mean that hospitalised helmet wearers had a higher rate of head injury than hospitalised non-helmet wearers and an OR lower than 1 would mean that helmet wearers had the lower rate of head injury.
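To make the arithmetic concrete, here is a toy 2×2 table in Python — the patient counts are invented, chosen only so that the odds ratio lands near the 0.3 reported by the reviewed studies:

```python
# Toy case-control table of hospitalised cyclists (invented counts).
#                 head injury   other injury
helmeted   = {"head": 30,  "other": 170}
unhelmeted = {"head": 200, "other": 400}

# Odds of head injury within each group of hospitalised cyclists:
odds_helmeted   = helmeted["head"] / helmeted["other"]        # 30/170
odds_unhelmeted = unhelmeted["head"] / unhelmeted["other"]    # 200/400

# The odds ratio: below 1 means helmeted patients had lower odds
# of their injury being a head injury.
odds_ratio = odds_helmeted / odds_unhelmeted
print(f"OR = {odds_ratio:.2f}")   # OR = 0.35 with these invented counts
```

Note what the calculation never uses: any count of cyclists who did not end up in hospital — which is exactly why the design can say nothing about exposure to risk.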

The five studies under review all agreed on odds ratios in the region of 0.3, meaning that hospitalised helmet wearers had considerably fewer head and brain injuries than hospitalised non-helmet wearers.  It’s a significant result.  Not something that often happens by chance — especially repeated in five different studies.  In the set of cyclists who turned up at the Emergency Room, there was a strong correlation between whether one wore a helmet and whether one had a head injury.

That, according to the Cochrane Collaboration, is the best evidence that we have on bicycle helmets.  In the population of hospitalised cyclists in four cities in the late 1980s and early 1990s, there was a significantly higher rate of head and brain injury in those who were not wearing a helmet.  Nothing about mortality or life expectancy.  Nothing about injury recovery.  Nothing about injury and hospitalisation rates in the whole population of cyclists.  That’s not a criticism of the Cochrane Collaboration or its review: they are reviewing the best evidence we have.

Evidence that is, apparently, sufficient for the British Medical Association to campaign for compulsory use of a medical intervention.

Those are just the obvious limitations of the question being asked and the study design used to answer it.  The less obvious limitations are where it really gets interesting.

Killer cures

What kind of moron does not wear a helmet whilst riding a bike? Anyone that stupid deserves to have their brains scrapped off the road. —Dave, bloke commenting on the failed Melbourne bike share.

Cycle in London without a helmet?  You’d need your head examined… —Ross Lydall, Evening Standard transport correspondent.

The BMA, as a part of its policy to improve safe cycling supports compulsory wearing of cycle helmets when cycling for children and adults. —The British Medical Association

I know a lot of you find the whole helmets thing — whether they “help” or “work” or not — tiresome and unimportant.  Well tough.  Bicycle helmets are a medical intervention — a special kind of medical intervention — and whether or not medical interventions work and are worthwhile is always a fascinating subject.  More importantly, a large proportion of the general public and of journalists assume that helmets work, and the British Medical Association campaigns for compulsory bicycle helmet laws.  What the BMA does matters.  If the BMA endorses a medical intervention, we can’t dismiss arguments about it as tiresome and unimportant.

Archie Cochrane, the influential champion of modern evidence based medicine and one of history’s most underrated heroes, is said to have played a mischievous prank on colleagues.  In an age when doctor knew best, Cochrane managed to organise a randomised trial of two care regimens for recovering heart attack patients: extensive hospital care (which every doctor knew was what a heart attack patient needed) versus home care.  A few months into the trial he convened his colleagues in the monitoring group to break the bad news that eight home care patients had died versus four hospital care patients.  His colleagues’ fears had been proven correct: hospital treatment was clearly far superior to home treatment and the trial must be stopped immediately as it would simply be unethical to continue to subject patients to dangerous home care.  At which point Cochrane took another look at his notes and declared that, to his great embarrassment of course, he had misread his shorthand: eight hospital patients had died for only four home care patients.  After the awkward silence, the monitoring group all agreed that it was far too early to draw any conclusions from such small numbers and at such an early stage — it could be pure chance that more patients died in hospital care.  The trial went on and never did provide any evidence that hospital care is any better than home care.

It seems obvious that bicycle helmets are a good thing.  They save lives.  They prevent life-changing head injuries.  If your head is fast approaching concrete, you want something to intervene.  It’s common sense, right?  You’d be mad not to wear one.

But Cochrane and his fellow mid-20th century proponents of evidence-based medicine showed that facts do not always match common sense.  The obvious answer is not always the correct one.  The obvious common sense fact that hospital care is better than home care for recovering heart attack patients turned out not to be correct.  As a new generation of doctors recognised the importance of evidence-based medicine, randomised controlled trials were retrospectively carried out on nearly everything that doctors do.  And, oops, they discovered that a lot of practices that doctors had considered to be simple obvious common sense had actually been harming their patients, ruining lives and sometimes killing people.

For a long time I took a Pascal’s Wager on bike helmets: while I had been given various reasons to believe that even if there was a benefit from wearing one it was probably marginal, there was no good reason not to wear one.  But the lesson from Cochrane — that common sense can kill you — is that there could be a very good reason for not wearing one.  What if wearing a bicycle helmet actively increases your risk of injury and death while riding a bicycle?  We can’t just assume that it doesn’t.

How could bicycle helmets possibly be bad for you?  Concrete meets head: intervention surely a good thing?  As that great 21st century populariser of evidence-based medicine would say: I think you’ll find it’s a bit more complicated than that.  In helmets, as in most transport issues, we seem to be obsessed with the engineering and overlook the way that people behave.  Helmet efficacy is as much a question of psychology as it is physics.

Because the interesting aspect of helmet research is not so much how they affect your chances of surviving an accident, but how they affect your chances of having an accident.  It all comes back to how road users behave, and there are reasons to believe that helmet use could change people’s behaviour in a way that increases the accident rate.  Many readers will already be familiar with the two most established lines of research: risk compensation and the safety-in-numbers effect.  I’ll look at those in more detail another time, but briefly, risk compensation proposes that we adjust our behaviour according to perceived risks — in this case, the cyclist wearing the helmet perceives himself to be at reduced risk, and happily cycles with less care; more importantly, the car, bus and truck drivers around him drive with less care.  The safety-in-numbers effect proposes that cyclists are safer when there are more cyclists on the road — both in that specific time and place, as other vehicles will have to slow and use caution around them; and in general, as other road users will be expecting to see cyclists and are more likely to know how to behave around them.  If the perception is that cycling is a dangerous extreme sport that requires a helmet, and if that perception puts people off cycling, then the safety-in-numbers effect is diminished.

It’s easy to dismiss these things without considering them: helmets are hard but simple; behaviour is soft but complicated.  It’s easier to go with common sense.  But common sense is often what bad science is made of, and common sense can kill you.

That doesn’t mean we can just assume that helmets are ineffective or bad.  With a medical intervention, you start from scratch, collect the data, and follow the evidence wherever it takes you.  This introductory post and its title are not supposed to bias our exploration of the evidence one way or another, only to get us beyond the unexamined assumption that helmets work.

So what’s the best evidence on bicycle helmets?  Named in honour of the pioneer Archie Cochrane, the Cochrane Collaboration systematically reviews the evidence for medical interventions.  A Cochrane Review looks carefully at all of the research that has been conducted on an intervention, considers the factors affecting the quality of each piece of research, and synthesises the results of all of the research to a conclusion which will generally be considered by medical practitioners to be the best knowledge we currently have on that intervention.  In a field that must always remain skeptical of the status quo and open to new evidence, a Cochrane Review is in practice considered to be the closest approximation we have to The Truth.  Good doctors don’t use their common sense, they use Cochrane Reviews.

The Cochrane Collaboration have reviewed the evidence for bicycle helmet efficacy.  This weekend, I’ve got half a dozen posts looking at that evidence, the way that it is presented by the Collaboration, and the evidence that the Collaboration has chosen to omit.


When I see a medical statistician on a bicycle…

…I do not despair for the future of the human race.

In my day job I work for scientific and medical journals, a million miles from transport and planning policy.  Except that this week I was pleasantly surprised to find that one of our papers was on all the bike blogs.  (I had nothing to do with the paper, and didn’t even notice it until it was published.)  In BMC Public Health, Andrei Morgan and colleagues have done what we at AWWTM love: taken the best methods that we have for evaluating evidence and applied them to the topic of London cycling; specifically they have described the data on cyclists killed in London traffic.

The interesting factoids are:

  • While the annual number of deaths remained relatively static over time, cycling grew from an estimated 0.85% to 1.48% of total traffic kilometres in London, so deaths per estimated cyclist km fell considerably.
  • Three quarters of the killings were on or at junctions with main roads.
  • Women were more likely to be killed in inner London and during daylight; men were killed day and night throughout London.  Men accounted for more of the total deaths, but the authors did not normalise any of these data, so we can’t say whether it’s because men are more incompetent or women more safety conscious, whether drivers behave differently around them (as Ian Walker previously found), whether men are more likely to be using the main roads where crashes happen, or whether there are simply more men cycling, especially in the outer boroughs and at night.
  • 40% of the killings were in the outer boroughs, despite there being much lower levels of cycling there.  The two are probably not unconnected: the outer boroughs have bigger faster roads and fewer cycling facilities.  The inner boroughs have slower speeds and “safety in numbers”.  People who think that London’s main roads are dangerous places are not entirely stupid.
  • There were five reported incidents in which “only the cyclist was involved”.  Presumably people riding into lampposts, or “just falling off“?
  • And of course, the most timely finding: over 40% of incidents involved freight vehicles, half of those on left-turns.
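The normalisation behind the first bullet can be sketched as follows.  The traffic shares are from the paper; the death count and total traffic kilometres are invented placeholders, and cancel out of the ratio anyway:

```python
# Static deaths against rising cycle traffic means a falling rate per km.
deaths_per_year = 15       # hypothetical static annual toll
total_traffic_km = 30e9    # hypothetical total London traffic, km per year

# Cycling's share of traffic kms (from the paper):
share_before, share_after = 0.0085, 0.0148

rate_before = deaths_per_year / (total_traffic_km * share_before)
rate_after = deaths_per_year / (total_traffic_km * share_after)

# The placeholders cancel: the rate falls by exactly 1 - 0.0085/0.0148.
print(f"deaths per cyclist-km fell by {1 - rate_after / rate_before:.0%}")
```

The point is that the same death toll can represent a sharply falling risk if the amount of cycling is growing underneath it.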

The authors — themselves London bicycle users at the School of Hygiene & Tropical Medicine in Bloomsbury — conclude that trucks should be banned from central London.

The paper is published coincidentally (it was written over a year ago, before peer review) in the same week that Dennis Putz was sentenced for killing Catriona Patel with a truck at Oval last year, and that Boris Johnson promoted his own ideas about banning HGVs from central London (despite delaying the LEZ which might have helped a bit), amongst many other events that have highlighted the problem of trucks in central London this autumn.  So to an extent the paper only adds more weight to a conclusion that most of us had already reached, through previous studies and through our own amateur observation and experience, and for other reasons additional to safety issues: that trucks do not belong in city centres.

Rather, the important message that I got from the paper was to highlight just how poor the evidence-base for cycling safety policy is.  The authors repeatedly had to acknowledge and apologise for the limitations of their work — in the places where I, in my lowly blog post, can speculate wildly about possible explanations for the authors’ observations, the authors themselves must stay silent because there simply isn’t good enough data on things like the characteristics of London bicycle users and their bicycle journeys.  It’s an issue that we keep coming across, in arguments over bicycle helmets, segregated infrastructure, and every other policy, intervention, and initiative: the documented evidence rarely approaches the quality necessary for making important decisions on important policies.

We at AWWTM are very much of the (evidence-based) opinion that a policy or intervention isn’t worth pursuing unless it is informed by evidence of the way that the world works, and how it might affect the way that the world works.  And it depresses us that this even needs saying, but it seems that many politicians and planners are happy to dogmatically follow policies that have been shown to fail, and to implement new ones without doing anything to check that they are working.

More of that later.
