Tag Archives: medical statistics

Who are all these self-harming Dutch helmet wearers?

Martin Porter mentions a fun fact about helmet wearing. (Unfortunately, he uses Blogspot, which, despite appearances, doesn’t do commenting.)

Hans Voerknecht has been to a Velo-City conference in Vancouver to explain why mandatory helmet laws are not such a great idea.  One of his statistics is that in the Netherlands, where cycling is ubiquitous, 13.3 per cent of the cyclists admitted to hospitals with injuries wore helmets — even though just 0.5 per cent of Dutch cyclists wear helmets.

This statistic is both utterly useless and extremely important. It tells us nothing about whether helmets are effective, ineffective or dangerous, but it does brilliantly illustrate the fact that the helmets issue is far from being a simple “no brainer”, and hints at one of the major flaws in the scientific studies of helmet efficacy.

Martin speculates on the reason for this intriguing thirty-fold higher rate of hospitalisation amongst helmet wearers:

Maybe tourists from Anglo Saxon nations wearing helmets are disproportionately represented in the hospital statistics.  Maybe also those with helmets are perceived by motorists or perceive themselves to be less vulnerable.
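As an aside, the thirty-fold figure can be reproduced from the two quoted percentages. Here's a back-of-envelope sketch (my own, not from either post; only the 13.3% and 0.5% figures come from Voerknecht's talk, and treating them as an odds ratio is my assumption):

```python
# Rough reproduction of the ~30x figure from the two quoted percentages.
# (My own back-of-envelope sketch; only the percentages are from the talk.)

helmeted_in_hospital = 13.3 / 100   # share of injured cyclists who wore helmets
helmeted_in_population = 0.5 / 100  # share of all Dutch cyclists who wear helmets

# Naive ratio of the two shares:
naive_ratio = helmeted_in_hospital / helmeted_in_population
print(round(naive_ratio, 1))  # 26.6

# Treated as an odds ratio (odds of being helmeted among the injured
# versus odds of being helmeted among Dutch cyclists generally):
odds_ratio = (helmeted_in_hospital / (1 - helmeted_in_hospital)) / (
    helmeted_in_population / (1 - helmeted_in_population)
)
print(round(odds_ratio, 1))  # 30.5
```

Either way of doing the division lands in the region of thirty, which is presumably where Martin's figure comes from.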

In fact, it’s obvious who the helmet wearers are in the Netherlands.

Here’s a cyclist wearing a helmet:


while this bicycle user is helmet free:


These cyclists, ready for Saturday morning training, are wearing helmets, but the woman who has just passed them isn’t:


This cyclist is wearing a helmet:


This family out for a ride isn’t:


This cyclist is wearing a helmet:


This chap just has a cap:


This guy is wearing a helmet:


This one isn’t:


These cyclists are wearing helmets:


These guys aren’t:


These cyclists are wearing helmets:


And these aren’t:


Can you spot the difference? All of the helmeted cyclists are racing around, head down, feet firmly clamped to the pedals on fragile, lightweight, skinny-tyred bicycles — except for the one on a muddy, knobbly-tyred mountainbike. Most of the helmet photos were taken at the weekend. Some of the others were too: a couple of gents leisurely touring the sand dunes in a nature reserve, and a family crossing Nesciobrug, perhaps off for a picnic in the country. But mostly they’re just people making everyday journeys: commuters in Amsterdam, shoppers in Utrecht, school kids in Houten. They’re on sturdy, steady bicycles, rarely doing more than 15mph. Their environment is not completely without hazards, but even if things do go wrong, they’re extremely unlikely to find themselves hospitalised. The racers and mountainbikers, meanwhile, are far more likely to fall off or hit something, and at the sort of speeds where that breaks things.

This is one of the major flaws in much of our research on helmets, and in much of the British approach to cycling. It fails to account for the differences between using a bicycle and participating in (extreme) sports.

Edited to add, in case it wasn’t clear — for I fear that too frequently in these posts I take all of the background as read, having been over it many times before — in the Netherlands these racers wearing helmets are the same people riding utility bikes without them. The folk who get dressed up in lycra and helmets to ride sports bikes at the weekend will, during the week, be riding a utility bike in normal clothes and no helmet, because that’s what the Dutch do. All of them. I mean, they don’t all do the racing, but they all have a utility bike. We don’t expect folk who enjoy a bit of rock climbing at the weekend to continue wearing their helmet all week, or people whose hobby is diving to keep the scuba tank on for the Monday morning commute.

Can drivers be taught a lesson?

M’coblogger Ed thinks there is a case for teaching drivers to behave — specifically by appeals to patriotism. Education programmes are a popular idea amongst cyclists, cash-strapped councils, and road safety types. I dismissed them as a solution that doesn’t work in my own post on revenge and road danger, but didn’t go into any detail. So I thought I’d better ask: what’s the best evidence we have about driver education programmes?

Remember what I said about bicycle helmets. It may be common sense that teaching drivers will make roads safer and nicer places to be, but common sense is frequently wrong, and cures can kill if they’re based on common sense rather than evidence. Trying to educate drivers could make the roads safer and nicer. It could be entirely ineffective. Or it could make them more dangerous and less pleasant. Until we conduct a controlled trial, we don’t know which.

There are two systematic reviews from the Cochrane Collaboration looking at the effectiveness of driver education programmes.  Cochrane reviews are, remember, the independent synthesis of everything that we know about a particular intervention, and are considered by doctors to be the closest thing we can ever get to fact.

The first Cochrane Review looks at the effectiveness of driver education in existing drivers. The schemes that have been trialled particularly focus on advanced driver training — the sort of programme that is designed to improve hazard detection and reduce error making, and which is frequently recommended for professional drivers — and on the remedial programmes that are increasingly offered to drivers who break the rules as an alternative to a driving ban.  These are lessons and lectures rather than marketing campaigns, but the remedial programmes — lectures on why speed limits matter — are particularly relevant to the “be nice” approach to making our streets nicer places where people feel able to ride bicycles.

The review found 24 trials from 1962 to 2002, all in the US except for one in Sweden, with more than 300,000 participants between them.  With those sorts of numbers, there is little chance of the review accidentally getting a false result.  Four were for advanced driving courses, the rest for remedial classes.  The programmes ranged from the simple supply of written material (9 trials) — a letter and copy of the rule book — through group lectures (16 trials) to proper one-on-one classes (7 trials), but all were designed to improve “driver performance and safety”.

The trials typically checked up on participants two years later and compared the rate of rule breaking and/or the rate of crashes in those who received the education programme and the controls who did not.  There was no difference. The education programmes didn’t stop drivers breaking the law or having crashes.  The authors concluded that companies shouldn’t bother with driving courses for their staff, but should let them take the train instead.

The evidence reviewed isn’t perfect. They could not, for example, blind participants as to whether they were in the study or control group. And the conclusions apply to the 32 specific advanced driving courses and remedial classes that were trialled — we cannot say for sure that other types of education campaign wouldn’t work. But the evidence tells us to at least be very wary of investing in any campaign strategy that relies on teaching people to play nice.

The second Cochrane review looks at the effectiveness of educating school kids before they start driving.  These are the sort of programmes that are supposed to address the fact that 17-21 year old drivers are twice as likely to crash as the average driver. They are particularly popular with the Road Safety industry and there are several varieties common in this country.  Indeed, I have first hand experience: it must have been during the final GCSE year, aged 15 or 16, that we were all taken to the Bovington tank training circuit to take it in turns driving hatchbacks (sadly no tanks) around the track, doing hill starts, three point turns, reverse parking, and, as a treat afterwards, emergency stops from 70mph. While not everybody is privileged enough to get real practical lessons, the government does at least make sure that kids are taught how to get a learner’s license and find an instructor, what tests they will need to take, and are given a few road safety messages.¹ *

The Cochrane review found three RCTs with a total of around 18,000 students. The review looked at the public health outcome of the trials, typically measured as the rate of crashes and/or violations in the first few years of holding a license. Giving school kids driving education did not reduce the incidence of crashes and violations.

Indeed, the authors, against common sense, found evidence of the opposite. The reason can be found in the other outcome that the trials measured: the time it took the kids from turning 17 (or whatever age was relevant in their particular locality) to passing their driving test (which the study gives the awful name “license delay”). Kids who were given driving classes at school were more likely to seek and obtain a license, and they did so earlier — and we already know that age correlates with crash rate and rule breaking (or at the very least, being caught and punished for rule breaking).  Driving classes in school weren’t making people drive safely, but they were making people drive.

You can see why driver education programmes are so popular with the road safety industry, puppet of the motoring lobby. The trials reviewed by Cochrane were all from the mid-1980s, yet we continue to put money and effort into programmes that are worse than useless. My own school driving lesson was fifteen years after school driving lessons were shown to be harmful to our health.

Whenever questioned, the government cites as justification its own non-controlled study which showed that kids are able to recall and are vaguely more likely to agree with specific road safety messages when asked three months after the lessons. No, really. That’s it.¹

So drivers can be taught. They can be taught, before they even become drivers, that driving is normal, just something that everybody does. The moment I turned 17 I wasted about a hundred quid on driving lessons before I stopped to ask myself why. Everybody was doing it, right? You do GCSEs at 16, driving at 17, ‘A’-levels at 18. That’s how it works.

Perhaps they can be taught to behave and we just haven’t worked out how yet. There are not, so far as I am aware, any trials on the effectiveness of making motorists try cycling on the roads. But I suspect even that would have limited effect, and maybe even that could backfire too.

Because people generally don’t do what they’re told to do, they do whatever looks normal and natural and easy. You can call that selfish and lazy if you like, but I don’t think that will help you understand or overcome the behaviour. In the UK it is normal and natural and easy to learn to drive and then drive badly. And people refuse to be taught that the things which are normal and natural and easy, the things that everybody around them is doing, are wrong. Experience trumps the word of others.

In the Netherlands, incidentally, cycling is normal and natural and, thanks to the infrastructure, easy. In the UK it’s none of those things. Make it easy and you’re nine tenths of the way to making it normal and natural.


That’s not what I said, say scientists

According to SCIENTISTS, “pollution is not improved by c-charge.”  (“Improved”? These scientists are so sloppy with their language.)

Journalists all over the city are this week reporting that the congestion charge has not reduced air pollution problems in central London, and that’s a fact, proven by science.  (As far as I know, the CCharge was never about air pollution — the clue’s in the name. But it’s potentially an interesting thing to look at all the same.  I can invent in my head plausible hypotheses for why it would improve air quality, and why it wouldn’t, but both would be useless without evidence either way.)

Unfortunately, I’m having a little trouble finding out who these so-called scientists quoted as the source for the claim are.  I asked scientists on twitter, but they couldn’t remember making the statement.

What I can easily find is a set of documents (none of them making the claim) reviewing work that explores a potential link between the CCharge and air pollution.  The documents are not new research published as peer reviewed articles in a scientific journal.  They are a “research report” — a King’s College academic’s review of what we know about the CCharge and air pollution — coupled with commentary and a press release.  The documents are all commissioned and published by the “Health Effects Institute”,

a nonprofit corporation chartered in 1980 as an independent research organization to provide high-quality, impartial, and relevant science on the health effects of air pollution. Typically, HEI receives half of its core funds from the US Environmental Protection Agency and half from the worldwide motor vehicle industry.

And that’s fine.  If the content is good, it doesn’t matter who funded it or where it was published.  I’m merely establishing exactly who is saying what.  The exact people are:

  • Professor Frank Kelly, an environmental health researcher specialising in air pollution, who (as leader of an independent group of scientists) wrote the comprehensive research report reviewing the evidence.
  • HEI’s Health Review Committee, who wrote a short commentary on Kelly’s research report.
  • HEI’s press office, who wrote the press release, which is the only thing that most journalists read.

The main line of research reviewed by Kelly looked at roadside and background levels of nitrogen oxides (NOx), carbon monoxide (CO) and small particulates (PM10).  The data compared the change (if any) in these pollutants at locations within the CCharge zone from a few years before implementation to a few years after implementation.  It did the same for control locations in London but outside of the CCharge zone, to account for any unrelated trends in air pollution.

Kelly’s report concluded that there was no evidence of a CCharge effect on roadside levels of NOx; a complicated effect on background levels of NOx (whereby one type was marginally reduced and another type increased, especially near the boundary of the zone); but a marginal reduction in carbon monoxide and a reduction in particulates becoming more pronounced the closer one gets to the CCharge zone.  So the overall conclusion is that there is a small amount of evidence to indicate that the CCharge has made a small reduction to air pollution (the exact opposite of the claim attributed to “scientists” in the headlines).  However, the data was extremely limited — in some cases to single data points — and Kelly’s report doesn’t put much weight on any of the conclusions.

Even where there is sufficient data, Kelly’s report indicates that there are limitations to what this kind of data can say about the CCharge effects.  The CCharge zone is very small, he points out, and our atmosphere somewhat fluid: the air in London blows around and mixes, so even with sufficient data, this study design is not an optimal way to answer questions about the CCharge.* **

All of these limitations in study design and data quantity are reflected in the Health Review Committee’s short commentary on the report:

Ultimately, the Review Committee concluded that the investigators, despite their considerable effort to study the impact of the London CCS, were unable to demonstrate a clear effect of the CCS either on individual air pollutant concentrations or on the oxidative potential of PM10. The investigators’ conclusion that the primary and exploratory analyses collectively indicate a weak effect of the CCS on air quality should be viewed cautiously. The results were not always consistent and the uncertainties surrounding them were not always clearly presented, making it difficult to reach definitive conclusions.

Which is to say: the research so far isn’t really capable of answering any questions satisfactorily.  While the evidence is for a small improvement in air quality thanks to the CCharge, none of the evidence is very good.  They go on to make the academic’s favourite conclusion: more research is necessary.

That’s right, this is a 121-page research review with associated commentary which simply concludes that the existing data is insufficient to tell us anything useful at all.  That’s no criticism of Kelly or the HEI.  They set out to review the evidence; the evidence just happens to be severely limited.

The Health Effects Institute decided to press release this.  “Study finds little evidence of air quality improvements from London congestion charging scheme,” the press release screams in bold caps.  “Pollution not improved by C-Charge,” says Londonist. Can you spot the difference between the HEI press release and the Londonist headline?

There is an old saying that absence of evidence is not evidence of absence.***  It’s a classic source of bad science and bad journalism, and in this case it nicely sums up what is wrong with the Londonist piece.  A review which actually found very weak evidence that the CCharge improved air quality is covered as a study which found hard proof of the exact opposite.

* Indeed, Boris Johnson would like to blame all of the city’s problems on clouds blowing in from the continent rather than the motor vehicles that account for most of it.

** I could add to this limitation the fact that the CCharge was not merely meant to cut car use within the zone: it was meant to fund a massive increase in bus frequencies, subsidise fares, and generally make buses and trains more inviting throughout London.  The effect of the CCharge on road traffic throughout the capital is complex, so it’s questionable whether the “control” sites can be said to be unaffected by the intervention.

*** Before someone points it out, yes I know it’s a bit more complicated than that, but in this case the saying applies nicely.

Would a helmet help if hit by a car?

This post is part of a series: it starts with the intro to the helmets issue, then the summary of the best evidence on helmets, then a quick diversion into how dangerous cycling is and an attempt to define terms. And there’s more…

Brake, the “Road Safety” charity, say yes:

Helmets are effective for cyclists of all ages, in crashes which do and do not involve another vehicle.

That matters, because if cycling safety is in the news, journalists will go to Brake for an easy quote.

The British Medical Association also say yes:

Helmets provide equal level of protection from cars (69%) compared to other causes (65%)

This is important, because the BMA is a highly trusted organisation with political influence, and their current policy is to endorse the criminalisation of riding a bicycle when not wearing a helmet.

Interestingly, president of the Automobile Association Edmund King, who was giving away free advertising bicycle helmets in London this week, disagrees with the nation’s medics on both issues:

We don’t think helmets should be compulsory but we think there are benefits… Our view is that helmets do not protect against cars but they may protect against some of the 2.2m potholes which often are the cause of smashes into the ground by cyclists.

Carlton Reid adds a little detail:

Most bicycle helmets are designed for falls to the ground from one metre at speeds of 12mph. They offer almost zero protection in collisions between bicycles and fast-moving cars.

The risk reduction provided by helmets in bicycle crashes that do and do not involve motor vehicles is one of the few sub-group analyses that was performed in the case-control studies that are covered by the Cochrane Review, and it’s no surprise that this is the source for the BMA’s claim. In bicycle hospitalisations that did not involve cars it reported nearly 70% fewer head injuries in the helmet wearers. In bicycle hospitalisations that did involve motor vehicles there were nearly 70% fewer head injuries in helmet wearers.  A helmet is equally effective at preventing head and brain injury in crashes with cars as in solo crashes.

What makes Edmund King and Carlton Reid think they know better than the nation’s medics and road safety campaigners?  Indeed, what makes them think that they can go around claiming the opposite of the cold hard corroborated stats of the Cochrane review?

Well actually, they’re not. Not quite. King and Reid are judging helmet efficacy by a slightly different metric to the Cochrane Review.  The Cochrane Review is looking at the set of bicyclists who have had an accident of a severity that hospitalises but does not kill outright.  The review says nothing about deaths, for example, and as the Cochrane Review itself notes, more than 90% of cyclist deaths are caused by “collisions” involving moving motor vehicles (the same proportion is found again by a separate route in the TRL review and again in NYC).  But only 25% of hospitalisations were caused by motor vehicles.  And while Cochrane suggested a whopping 85% of head injury hospitalisations (which in turn account for around half of all cyclist hospitalisations) could be avoided by wearing a helmet, the TRL review of post-mortem reports found that only 10-16% of all cyclist deaths might have been avoided.  Hospitalisations, of the sort reviewed by Cochrane, are not representative of deaths.  Fall off your bicycle and you might get hurt.  Get hit by a car and you might die.

That’s because when you fall off your bicycle, chances are you are toppling over some way — precisely the sort of simple fall that a helmet is designed for, and the sort of fall that is least likely to cause life-threatening injury to any other part of the body.  When hit by a car the body might be crushed, or thrown up and around at speeds that helmets are not designed for, and so there are many more opportunities to suffer fatal trauma to other parts of the body.

(As an aside, Brake actually get this one the wrong way ’round:

Nearly 50% of cyclist admissions to hospital are for head and facial injuries, and the majority of cyclist deaths and injuries are a result of head injury.

TRL has the answer to this one: around three quarters of cyclist fatalities did indeed involve a serious head injury.  But only about a quarter involved only a serious head injury.  The rest also involved one or more additional life-threatening injuries.  The Brake claim is at best misleading.)

This doesn’t mean that the BMA and Brake are all wrong* and King and Reid are completely correct.  A car at speed may be able to cause the sort of multiple trauma that merely falling over doesn’t, but that doesn’t mean that cars aren’t also capable of causing the sort of crashes that helmets are designed for, especially in low speed city traffic.

So Edmund King is wrong**.  But within the untruth he is communicating an important truth: cars are responsible for the most serious injuries and death, and helmets will rarely help in those cases.

Brake and the BMA are correct.  But their strictly truthful statements hide the crucial details, without which they are liable to seriously mislead.

* Indeed, they can’t be wrong.  You can provide a hypothesis for why helmets might be useless in crashes with cars, but no hypothesis can trump the real world stats that say helmets are useful in crashes with cars.

** Carlton Reid is not wrong, because he specified fast-moving cars.

What is a bicyclist?

This post is part of a series: it starts with the intro to the helmets issue, then the summary of the best evidence on helmets, then a quick diversion into how dangerous cycling is. And it won’t end here…

A good review of a medical intervention starts by explaining the population being studied.  The Cochrane review of helmets for preventing head injuries in bicyclists explains that its population is the set of bicyclists who sustained an injury that was sufficiently major to make them go to the ER for treatment (and not sufficient to kill them before they could seek treatment).

The review does not explain what they mean by a bicyclist.  (And since the original papers under review are closed-access, behind an extortionate paywall, we can’t know whether those do.)  Presumably they mean people riding a bicycle at the time that they sustained their injury.

Is that people riding their bicycle leisurely along a rail trail or towpath?

Is that people touring, head down into the wind in the deserted mountains?

Is that people racing in a peloton down the dual carriageway?

Is that kids in the BMX park?

Is that mountainbikers on the downhill courses?

Is it businessmen on their Bromptons riding through the stop-start city traffic?  Old ladies bouncing down cobbled streets on their step-through upright bikes?  Village vicars doing their rounds?

Mountainbikers, city commuters, and rail trail riders are very different people exposed to completely different environments and risks — and who have very different helmet-wearing and hospitalisation rates.  Lumping them all together is like lumping mountain hikers, sprinters, traceurs, marathon runners, city pedestrians and country footpath strollers together under the heading “walkers”.  But lump them together is exactly what the studies in the Cochrane review do, comparing the rate of head injury (as a proportion of all injuries) in helmet wearers and non-helmet wearers, and applying the results to make the recommendation that everybody should be made to wear a helmet while riding a bicycle, whatever their style and situation.  You may as well recommend Formula 1 safety gear for the drive to the supermarket.

Perhaps helmets help prevent head injuries in all people who use bicycles.  Perhaps they help mountainbikers more than tourists.  Perhaps it’s the other way around.  We don’t know.  We could know.  The researchers could have made sure to collect the data (perhaps the data is even already there, in the medical records) and then done sub-group analyses on their data to give individual results for separate groups of bicyclists.  But they didn’t.  Why not?  Did it just not occur to them that “bicycling” might not be a single pursuit?  Or did they just assume that it didn’t matter, or that nobody would notice?  Either way, it amounts to a pretty serious limitation when you’re asking “should we legislate to ban all kinds of bicycle use except where the bicycle user is wearing a helmet?”

Headline figures

If you haven’t done so already, start from this post and work your way forward.

In rare events like bicyclist injuries, odds ratios can be used as an approximation of relative risk: that is, how much a medical intervention changes the risk of a specific outcome.  An odds ratio of 0.3 is interpreted as a 70% reduction in risk of head injury when wearing a bicycle helmet.
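To make the arithmetic concrete, here is a toy case-control table (my own invented counts, chosen so the odds ratio comes out at 0.3; none of these numbers are from the actual studies):

```python
# Toy case-control table with invented numbers, chosen to give an odds
# ratio of 0.3.  Rows: injured cyclists presenting at hospital, split by
# helmet wearing; columns: head injury vs any other injury.
head_helmet, other_helmet = 10, 90           # helmet wearers
head_no_helmet, other_no_helmet = 100, 270   # non-wearers

# Odds of the injury being a head injury, in each group:
odds_helmet = head_helmet / other_helmet
odds_no_helmet = head_no_helmet / other_no_helmet

odds_ratio = odds_helmet / odds_no_helmet
print(round(odds_ratio, 2))       # 0.3
print(f"{1 - odds_ratio:.0%}")    # 70% -- the headline "risk reduction"
```

Note that the comparison is of head injuries as a share of each group's injuries, which is exactly the design the review's studies use.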

The Cochrane Review looked at five studies, which contained a number of sub-analyses.  There was actually a range of odds ratios found when looking at different types of injury in different groups of cyclists.  In one, an odds ratio of 0.15 was reported.

So now the headline figure is that bicycle helmets protect against a whopping 85% of injuries.  Imagine the lives that could be saved.  Won’t somebody think of the children?  The 85% figure is constantly repeated by “Road Safety” spokesmen, and reported without context by journalists.  It’s cited by the British Medical Association in support for banning people from riding bicycles except when wearing helmets.  The 85% figure matters.

Leaving aside questions over whether the 85% figure represents the real efficacy of helmets, how useful is it as a guide for how to live our lives?  Well, as Ben Goldacre puts it: “you are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day.”  Nobody has ever died from a meteor falling on their head*.

What Ben is saying is that relative risk is only a useful number to communicate to the public if you also communicate the absolute risk.  If you want to know whether it’s worth acting to reduce the risk of something bad happening, you need to know how likely it is to happen in the first place.

In the UK, 104 cyclists died on the roads in 2009, according to DfT stats.  It was 115 in 2008, but the trend has been downwards for a long time.  For simplicity, let’s say that in future years we could expect on average 100 cyclist deaths per year.  It’s really difficult to say how many cyclists there are in the UK, because you can define it in several different ways, and even then the data that we have is crap.  You can estimate how many bicycles there are, but these estimates vary, many bicycles might be out of use, and many of us own more than one.  You can take daily commuter modal share — which would give us 1 million cyclists — but there’s more to using a bicycle than commuting, and most people mix and match their modes.  According to the latest National Travel Survey, 14% of people use a bicycle for transport at least once per week.  An additional 4% cycle several times a month, and 4% again cycle at least once a month.  Cumulatively, 32% of the British people cycle at least sometimes, but some of those are too infrequent to be worth counting.  To be generous, and to keep the numbers simple, I’ll round it down to 16%, giving us 10 million on-road cyclists in the UK.  That means one in 100,000 cyclists is killed in a cycling incident each year.

To put it another way, there’s a good chance you’ll get killed if you carry on cycling right up to your 100,000th birthday.  (If you do not first die in the inferno caused by the candles on the cake.)  Or, if when Adam and Eve first left Africa 200,000 years ago they had done so on bicycles, there is a good chance that at least one of them would be dead by now.  Alternatively, if you accept that life expectancy is around 80-90, and make the unlikely assumption that all cyclists remain cyclists pretty much from cradle to grave, you might die cycling once in over a thousand lifetimes.  Nine hundred and ninety-nine lifetimes in a thousand, you will die of something much more probable.  Like heart disease, or cancer.
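The same arithmetic in code, with the assumptions spelled out (100 deaths a year and 10 million cyclists, as derived above; 85 years as a round cradle-to-grave figure of my own choosing):

```python
# Back-of-envelope version of the absolute-risk arithmetic above.
deaths_per_year = 100        # rounded-down DfT figure
cyclists = 10_000_000        # generous estimate from the modal-share data

annual_risk = deaths_per_year / cyclists
print(annual_risk)           # 1e-05 -- one in 100,000 cyclists per year

# Unlikely assumption: everyone cycles cradle to grave, for ~85 years.
lifetime_risk = annual_risk * 85
print(round(1 / lifetime_risk))  # 1176 -- once in over a thousand lifetimes
```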

But not everybody who dies on a bicycle dies of head injuries, and not everybody who dies of head injuries sustained while riding a bicycle would be helped by wearing a helmet.  The DfT/Transport Research Laboratory have done their own extensive review of the medical literature on helmets and say: “A forensic case by case review of over 100 British police cyclist fatality reports highlighted that between 10 and 16% of the fatalities reviewed could have been prevented if they had worn an appropriate cycle helmet.”  This is because, while some form of head injury was involved in over half of cyclist fatalities, the head injury was usually in combination with one or more serious injury elsewhere on the body; and even in those where only the head sustained a serious injury, as often as not, it was of a type or severity that a helmet could not prevent.  There are, of course, many caveats and limitations of such an estimation, which relied on many assumptions, some amount of subjective judgement, and a limited dataset which was biased to the sort of cyclist fatalities that the police are interested in.  So we could be generous and round it up to 20% — that helps keep our numbers simple.

So we’re talking about 20 lives saved per year, or in terms that matter to you, your life saved if you cycled for half a million years. Of course, a third of British cyclists already wear helmets, so we can add the number of cyclists whose lives are already being saved.  We could be generous again and say 40 lives per year.

That would give you a chance of less than 1 in 2,500 that, as a cradle-to-grave bicycle user, bicycling from nursery school to nursing home, you will die in a crash that a helmet would have protected against.  The chances are 2,499 in 2,500 that you will die of something else.  Like the 4 in 2,500 chances of being killed in a cycling incident where a helmet would not have helped.
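Sketching that last calculation explicitly, using my own framing of the numbers above (40 helmet-preventable deaths a year, 10 million cyclists, and a round 100 years of nursery-school-to-nursing-home cycling):

```python
# Generous figures from the text above.
helmet_preventable_deaths = 40   # per year, after rounding up twice
cyclists = 10_000_000
years_cycling = 100              # cradle-to-grave, for round numbers

annual_chance = helmet_preventable_deaths / cyclists
lifetime_chance = annual_chance * years_cycling
print(round(1 / lifetime_chance))  # 2500 -- a 1 in 2,500 lifetime chance
```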

Or the 6 in 2,500 chances of death by falling down stairs.  Or the 3 in 2,500 of being run over by a drunk driver.  Or the whopping 30 in 2,500 chances of dying of an air pollution related respiratory disease.**  Unfortunately I couldn’t find the British Medical Association’s policy on legal compulsion for users of stairs to wear appropriate personal protective equipment.
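For the curious, the back-of-envelope behind these odds fits in a few lines.  The 40 deaths per year is the generous figure from above; the ten-million cyclist population and the cradle-to-grave cycling lifetime are my own illustrative assumptions, so treat the output as a ballpark, not a measurement:

```python
# Back-of-envelope sketch of the lifetime-risk arithmetic above.
# The death toll is the post's generous figure; the population and
# lifetime are invented for illustration, not checked data.

preventable_deaths_per_year = 40    # generous estimate from the post
cyclist_population = 10_000_000     # ASSUMED number of regular British cyclists
cycling_lifetime_years = 85         # a cradle-to-grave cyclist, roughly

# Expected cyclist-years per helmet-preventable death:
years_per_death = cyclist_population / preventable_deaths_per_year

# Chance that a cradle-to-grave cyclist dies a helmet-preventable death:
lifetime_risk = cycling_lifetime_years / years_per_death

print(f"one such death per {years_per_death:,.0f} cyclist-years")
print(f"lifetime risk: about 1 in {1 / lifetime_risk:,.0f}")
```

With these inputs you get one helmet-preventable death per 250,000 cyclist-years, and a lifetime chance in the region of 1 in 3,000 — the same order of magnitude as the 1 in 2,500 above.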

Of course, in addition to the 100 cyclists killed on British roads each year, another 1,000 suffer serious but non-fatal head injuries, sometimes involving permanent and life-changing brain damage (as do users of stairs).  The Cochrane Review says that up to 850 of those injuries would be avoided or made less severe if a helmet were worn; the more pessimistic TRL review says that perhaps 200 of them might be prevented or mitigated by an appropriate helmet.  Either way, we’re still in the territory of many thousands of years spent cycling per injury prevented.
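The same sketch works for the injury figures.  Again, the ten-million cyclist population is an assumption of mine for illustration, not a number from either review:

```python
# Back-of-envelope for the serious-head-injury figures; illustrative only.
assumed_cyclists = 10_000_000                        # ASSUMED population, as before
prevented_per_year = {"TRL": 200, "Cochrane": 850}   # injuries prevented/mitigated

# Expected cyclist-years per prevented serious head injury, per estimate:
years_per_prevented = {
    source: assumed_cyclists / n for source, n in prevented_per_year.items()
}
for source, years in years_per_prevented.items():
    print(f"{source}: one prevented injury per {years:,.0f} cyclist-years")
```

Even on the optimistic Cochrane estimate, that is more than ten thousand years of cycling per prevented injury; on the TRL estimate, fifty thousand.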

Whether those numbers make helmets worthwhile is up to you — I don’t think these numbers alone objectively prove that helmets are or are not worth using.  Just don’t be fooled by the stark headline-grabbing figure of an 85% risk reduction.  When the absolute risk to begin with is smaller than that of fatally falling down the stairs, and a fraction of one per cent of the risk of cancer or heart disease, consider whether the risk reduction really matters.

Of course, that might all change once we’ve looked at the next part of the story…

* I have not checked this fact, which I just made up, but I would be surprised to hear that it is not true.

** Hastily googled and calculated headline figures for illustrative purposes only; again, I have not thoroughly assessed these.

Final disclaimer: this is a hastily scribbled blog post, not an academic paper.  I’ve checked my zeroes and decimal places, but if I’ve overlooked something or accidentally written something to the wrong order of magnitude, please do point it out.

So what’s the best evidence we have on bicycle helmets?

According to the Cochrane Collaboration — the source that most doctors will go to for their summary of the evidence — it is five studies from the 1980s and 1990s.

The Cochrane Review set out to answer a very specific question: “in the set of people who sought Emergency Room treatment having had a bicycle crash, did wearing a bicycle helmet correlate with the rate of head and brain injuries among the patients?”  These are important details — the question was not “in the entire set of people who ride bicycles, does wearing a bicycle helmet affect mortality, life expectancy, the rate of serious injury, or injury recovery?”  It’s not a bad question that the researchers are asking, but it is a very limited question — the data is restricted to the type of injury that is serious enough to send people to hospital, but not serious enough to kill outright; it does not ask whether helmets correlate with any other types of injury beyond head and brain (more later); and it can say nothing about whether helmet wearers and non-helmet wearers differ in their behaviour or exposure to risk of the type of accident that sends them to hospital in the first place.  The latter possibility turns out to be a very interesting one, which will be explored later.

The Cochrane Review searched the existing medical literature for high quality studies that were theoretically capable of answering their question.  There are several different ways that one could design a study to answer a question like the Cochrane question — some methods more reliable than others.  The Cochrane Review found five studies described in seven papers, all with the same design: case-control studies.  This study design looks at a set of people who have been hospitalised with head injuries while riding a bicycle and examines their records to find out whether more or fewer of them were wearing a helmet than a similar set of cyclists who were hospitalised at roughly the same time and place but whose injuries were not head injuries.

Case-control studies have a number of limitations that make them less reliable than other study designs, like the gold-standard randomised controlled trial.  Principally, the study must simply assume that the “case” and “control” populations are essentially the same, differing only in the intervention tested (helmets) and potentially in the outcome of interest (head injury as a proportion of all injuries).  The method accepts that there may be other differences between the populations of patients (are helmet wearers on average richer, more middle class, more likely to have health insurance, I wonder?), but assumes that those differences are not important to the question being addressed, and uses statistical methods to try to minimise their effect on the results.  For this reason, case-control studies are considered relatively weak evidence, and when more rigorous trials are conducted they often find that the case-control studies exaggerated the effects of interventions.  A good Cochrane review will carefully pick the studies it includes, eliminating case-control studies unless they do everything possible to minimise the limitations of the design, and this review appears to have done that.

The populations in the five studies reviewed were 1040 cyclists hospitalised in Cambridge UK in 1992; 1710 cyclists in Melbourne in the late 1980s; 445 child cyclists in Brisbane in the early 1990s; 668 cyclists in Seattle in the late 1980s; and a further 3390 in Seattle in the early 1990s.  The results therefore apply to the shape, style and construction of the helmets that were on the market in the mid-1980s to early 1990s, and to the types of people who were choosing to wear helmets at that time.  (The Seattle study, completed in 1994, does look specifically at “hard shell”, “soft shell” and “no shell” helmets, finding the same result for all three.)  Note that the Cochrane review was assessed as “up to date” in 2006, meaning that the authors do not believe that there is any good quality data newer than the early 1990s.  I’ll let you decide whether these studies are relevant to your own 2010-model helmet or not.

The outcome of a case-control study is the odds ratio — a measure of the strength of association between the intervention and the outcome, i.e., how big an effect the intervention appears to be having, and whether it appears to be helping or harming.  It’s literally the ratio of the odds of head injury in hospitalised helmet wearers to the odds of head injury in hospitalised non-helmet wearers.  So an OR of 1 would mean that the odds of head injury were equal, while an OR higher than 1 would mean that hospitalised helmet wearers had a higher rate of head injury than hospitalised non-helmet wearers, and an OR lower than 1 would mean that helmet wearers had the lower rate of head injury.
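To make the calculation concrete, here it is worked through on an entirely invented 2×2 table — these counts are not from any of the five studies:

```python
def odds_ratio(helmet_head, helmet_other, nohelmet_head, nohelmet_other):
    """Odds ratio of head injury for helmeted vs unhelmeted hospitalised cyclists.

    Each argument is one cell of the case-control 2x2 table:
    (helmet worn?, head injury?) -> number of patients.
    """
    odds_helmet = helmet_head / helmet_other        # odds of head injury, helmet worn
    odds_nohelmet = nohelmet_head / nohelmet_other  # odds of head injury, no helmet
    return odds_helmet / odds_nohelmet

# Invented illustration: 25 of 125 helmeted patients had head injuries,
# against 200 of 400 unhelmeted patients.
print(odds_ratio(25, 100, 200, 200))  # 0.25 / 1.0 = 0.25, i.e. OR well below 1
```

Note that everything here is conditioned on having been hospitalised in the first place — the calculation never sees cyclists who didn’t crash.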

The five studies under review all agreed on odds ratios in the region of 0.3, meaning that hospitalised helmet wearers had considerably fewer head and brain injuries than hospitalised non-helmet wearers.  It’s a significant result.  Not something that often happens by chance — especially repeated in five different studies.  In the set of cyclists who turned up at the Emergency Room, there was a strong correlation between whether one wore a helmet and whether one had a head injury.

That, according to the Cochrane Collaboration, is the best evidence that we have on bicycle helmets.  In the population of hospitalised cyclists in four cities in the late 1980s and early 1990s, there was a significantly higher rate of head and brain injury in those who were not wearing a helmet.  Nothing about mortality or life expectancy.  Nothing about injury recovery.  Nothing about injury and hospitalisation rates in the whole population of cyclists.  That’s not a criticism of the Cochrane Collaboration or its review: they are reviewing the best evidence we have.

Evidence that is, apparently, sufficient for the British Medical Association to campaign for compulsory use of a medical intervention.

Those are just the obvious limitations of the question being asked and the study design used to answer it.  The less obvious limitations are where it really gets interesting.