Tag Archives: bad science

Repost: Exclusive: things tend to have a greater effect on the world once they exist

So the Institute of Advanced Motorists have press released the fact that casualties are up on 20mph streets (deaths are down, but they were already in single figures, so that’s random). I thought it might be worth reposting this sarcastic rubbish that I bashed out last time some idiot tried to claim that an increase in casualties on 20mph roads is evidence of their failure.

I heard on the lunchtime news on Radio 4 today the shocking news of an increase in the number of people injured on 20mph streets. Back when there were fewer 20mph streets, fewer people were injured on 20mph streets, they revealed. Now that there are more 20mph streets, more people are being injured on 20mph streets. This road safety intervention, they concluded, isn’t working.

This watertight logic perhaps also explains why BBC News have been so quiet on the destruction of the NHS. Before the NHS existed, literally nobody at all died in any of the then non-existent NHS hospitals. Almost as soon as the NHS was created, people started dying in the newly created NHS hospitals. Clearly the NHS doesn’t work.

Members of the Association of British Nutters will no doubt be getting very excited about these numbers, but before they make rash recommendations they should remember that back before the British motorway network was built, there were literally no people injured on the British motorway network, whereas now that the British motorway network exists, there are lots.

I hope the main elements of the astonishing innumeracy that went into the BBC story are obvious: the failure to put the raw numbers into any kind of useful context, whether of the rapid growth in the number of streets with 20mph limits as it has become easier to set the limit (or their changing nature as 20mph starts to roll out beyond quiet residential streets onto busier high streets), or of the far higher number (and, more importantly, rate) of injuries and deaths either on equivalent 30mph streets or on the same 20mph streets before the speed was lowered. Needless to say, reducing speeds on a street from 30mph to 20mph cuts injuries, regardless of the entirely banal fact that those few injuries which remain will thenceforth be added to the tally for 20mph streets instead of that for 30mph.

So, mockery over, there’s a more important point: should an increase in injuries, if there really had been one, automatically kill off the further roll-out of 20mph zones?

Those who dwell at the bottom of Bristol’s Evening Post presumably think so:

It beggars belief that the council intend reducing the 30mph speed limit. A limit introduced when there was no such thing as MoT’s, ABS brakes, crash zones on the front of cars and good street lighting.

I can see no justification in spending this money and would dearly love to know who Bristol City Council think it will benefit? It certainly won’t be the youth, disabled or elderly.

James R Sawyer clearly thinks that the 20 zones must be all about safety, as he argues that his ABS brakes and crash zones are already enough to keep him safe as he drives through Bristol at 30. But Bristol have always been clear about why they’re moving towards a 20mph city:

Councillor Jon Rogers, Cabinet Member for Care and Health, said: “…20 mph zones create cleaner, safer, friendlier neighbourhoods for cyclists and pedestrians. They are popular with residents, as slower traffic speeds mean children can play more safely and all residents can enjoy calmer environment.”

Slower speeds are not a simple issue of cutting crude injury statistics. They’re more about reviving communities which have been spoiled and severed by traffic speeding through them, reclaiming a little bit of the public realm that has been monopolised by the motorcar, and enabling liveable, walkable neighbourhoods to thrive. Far from “certainly no benefit for the youth, disabled or elderly”, we know much — some of the research having in fact been carried out in Bristol itself — about the many adverse effects that higher speeds and volumes of traffic, the loss of shops and services to car-centric planning and living, and the blight of high streets by arterial traffic have on the mobility of those most excluded from the car-addicted society, particularly the young, the elderly, and the disabled. If they’re lucky, these people will be forced into dependency on those willing to help them get around; if they’re unlucky, they will simply be left isolated and severely disadvantaged. But of course, we don’t like to acknowledge the existence of the large numbers of people who are excluded from much of our society, culture and economy by our rebuilding the world with nobody in mind except car owners.

The injury statistics cited in the BBC News piece include minor injuries, which at slow speeds account for most injuries: little things which don’t require a hospital stay. What are a few more cuts and bruises if it means that thousands of kids are free to walk to school with their friends instead of stuck inside mum’s car? Would we rather keep the infirm all shut up and sedentary with no access to the shops and the services they need, too intimidated by the anti-social behaviour of motorists to cross the road, than risk one person having a fall?

These strands can be tied together by the other piece of context that would have been worth including in the BBC piece: in the same year that injuries in 20mph zones increased, injuries to pedestrians and cyclists in general increased, in part because there are more of them about to be injured. It has always been the case that the great road safety gains that successive governments have boasted of have been won mainly by making streets so dreadful that people find them too frightening, stressful, unpleasant, humiliating or ineffective to walk on, cycle on, or do anything on other than sit in a secure metal box. Start making the streets a little bit less awful and people return to them.

“The overall results show that ‘signs only’ 20mph has been accompanied by a small but important reduction in daytime vehicle speeds, an increase in walking and cycling counts, especially at weekends, a strengthening of public support for 20mph, maintenance of bus journey times and reliability, and no measurable impact on air quality or noise.”

Consider cycle tracks, which people still like to claim increase car-cycle collisions (they don’t). The before-and-after studies behind that claim largely ignore the fact that the point of cycle tracks is to widen bicycle use from the confident and quick-witted to the people who are otherwise too scared, stressed or infirm to ride, which invalidates the before-and-after study design. In the same way, an increase in minor injuries after a speed limit reduction, even if it were really to happen, would be far from proof of a failure.

Postscript, July 2014

The IAM make a thing of the DfT stats showing a 26% increase in serious injuries in 20mph limits and a 9% decrease in 30mph limits. Given that the base figures for the two sets are so different, that amounts to 87 more injuries in 20 zones and 1102 fewer injuries in 30 zones. Of course, the only figures that would really matter (in the absence of a double blind randomised controlled trial) are before/after comparisons of the streets that have switched and/or case-control studies of those streets (at least, for measuring injuries; as I said before, there are other important outcomes to 20 zones besides injury rates). And given that these numbers are not (and could not really be) normalised to the changes in total length of the two types of street, and are influenced by far too many confounding variables, I wouldn’t dream of suggesting that they’re worth drawing any conclusion from. But if you’re intent on drawing a conclusion, given the trend in switching 30mph streets to 20mph streets, a net reduction in serious injuries of 1015 seems like a far more pertinent one than a 26% increase in injuries on 20mph streets.
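To see why the percentages mislead, it helps to back out the baselines they imply. A quick sketch: the 26%, 9%, 87 and 1,102 figures are those cited above; the implied baselines are simply my arithmetic from them.

```python
# A 26% rise that amounts to only 87 injuries implies a small baseline;
# a 9% fall that amounts to 1,102 injuries implies a very large one.
rise_pct, rise_abs = 0.26, 87      # 20mph streets (serious injuries)
fall_pct, fall_abs = 0.09, 1102    # 30mph streets (serious injuries)

base_20 = rise_abs / rise_pct      # ~335 serious injuries before the rise
base_30 = fall_abs / fall_pct      # ~12,244 serious injuries before the fall

net_change = rise_abs - fall_abs   # -1,015: fewer serious injuries overall
print(round(base_20), round(base_30), net_change)
```

A 26% jump on a baseline of a few hundred is exactly what you would expect when the injuries simply follow the streets as their limits are switched from one category to the other.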



DafT’s deeply regressive fantasy formula

Flicking through Google Reader, catching up, something caught my eye in George Monbiot’s latest:

Cost-benefit analysis is systematically rigged in favour of business. Take, for example, the decision-making process for transport infrastructure. The last government developed an appraisal method which almost guaranteed that new roads, railways and runways would be built, regardless of the damage they might do or the paltry benefits they might deliver(8). The method costs people’s time according to how much they earn, and uses this cost to create a value for the development. So, for example, it says the market price of an hour spent travelling in a taxi is £45, but the price of an hour spent travelling by bicycle is just £17, because cyclists tend to be poorer than taxi passengers(9).

I was vaguely aware that the government had complicated infrastructure cost-benefit formulae that included attempts to put value on people’s time, but I wasn’t aware that they had gone so far as to value the time of cyclists versus the time of taxi passengers.  So I followed the reference #9.  I’ve read some absurd documents in my time, but I wasn’t quite prepared for this.

When deciding whether a transport project — a road “improvement”, a high-speed railway, a bicycle path — is value for money, the Department for Transport consider the value of the time that users of the new infrastructure will save.  When deciding how much time to give each phase of the traffic lights, or whether a bicycle lane gets priority over the road, transport agencies will consider the value of the time of the competing users.  In particular, DafT are interested in the value of the time that employers will save, because your time doesn’t matter, but the time you waste at work does.

How does DafT determine the value of an hour that an employee spends cycling for work, compared to one that the employee spends on a train or behind a steering wheel?  All it needs to know is the average hourly wage of employees using each mode.

It uses the 1999-2001 National Travel Survey.  Specifically, there is a dataset that counts all journeys made by mode and by income band, so we know whether the rich and the poor are more or less likely to drive, cycle, ride the bus, etc.  If a mode is over-represented amongst the rich and under-represented amongst the poor then DafT consider the time of users of that mode to be more valuable.

No, really.
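In outline, the method is nothing more than a journey-weighted average wage per mode. A minimal sketch, with invented wages and journey counts (not DfT’s actual figures, which come from the 1999-2001 NTS):

```python
# Invented figures for illustration only: an average hourly wage per
# income band, and journey counts per mode per band (NTS-style data).
wages = {"low": 10.0, "mid": 20.0, "high": 40.0}
journeys = {
    "taxi":  {"low": 10, "mid": 30, "high": 60},
    "cycle": {"low": 50, "mid": 35, "high": 15},
}

def value_of_time(mode):
    """Journey-weighted average wage of the mode's users, in GBP/hour."""
    counts = journeys[mode]
    return sum(wages[b] * n for b, n in counts.items()) / sum(counts.values())

print(value_of_time("taxi"), value_of_time("cycle"))  # 31.0 18.0
```

Because taxi journeys skew towards the higher income bands, an hour in a taxi comes out worth more than an hour on a bicycle, regardless of what either traveller is actually doing with the time.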

If you can’t spot what’s wrong with this, and why I am cringing, I don’t know where to start.

Perhaps with the fact that we’re trying to derive the value of time “wasted” on travel-for-work from a dataset of travel modes that the rich and poor use outside of work?  A dataset, indeed, that includes students and the unemployed.

Or the fact that we’re specifying the average value of an hour of time to two decimal places, despite the vast range of values being averaged.

Or with the idea of specifying the value of one Great British hour, despite the massive variation in income and modal share between cities and regions?

Or the assumption that time spent travelling is always time wasted (my favourite office is a good long off-peak and under-crowded train ride with a netbook and an android), and that time “saved” by faster journeys is converted into economically productive time?  (Rather than, e.g., additional journeys.)

Maybe we should go back to the silly assumption that the work of the highest paid is the most economically important?

Or perhaps point out the critical fact that the demographics of users of a transport mode prior to investment do not necessarily reflect the demographics of users after investment, and that investment which enables modal-shift can “save” time too.

Or question what DafT are doing using inflation-adjusted figures derived from a decade-old version of the National Travel Survey when there are not one but eight more recent datasets?  In 1999-2001, the railways, the buses, and cycling were all at their very lowest ebb.  Were DafT to use the latest numbers, the value of an hour on a bicycle would be considerably higher.¹

The whole thing is absurd.  According to DafT, if I take a taxi to meet my client, there is more value in their giving me a faster taxi ride than in enabling me to get to the meeting even quicker by bicycle.

This is cargo cult mathematics and cargo cult economics.  These numbers — given on the DafT website down to the exact pennies-per-hour — are pure fantasy.  I am actually embarrassed for the department that they are not only using these numbers, but are proudly publicising the pioneering way that they have been derived.  Were it not for the fact that transport policy and funding is such an unsexy topic, the press would be in gales of laughter at this nonsense.

And I would be in gales of laughter were it not for the fact that, as I understand it, this crap is the foundation of a deeply regressive and damaging political programme.  When modelling the impact and benefits of investment in a transport intervention, the Department factors in this hypothetical “value of time” of users.

That is, because a taxi passenger is more likely to be very highly paid than a cyclist, when all other variables are equal the department should invest in schemes that favour taxis ahead of schemes that favour cyclists.  Aberdeen Cars is right: traffic light timings remind the economically-inactive cyclist that she does not matter.  Stabiliser can find some answers in this policy.

It becomes a positive feedback loop.  Invest in trains ahead of buses and the rich will use the trains while the poor ride the bus; the next appraisal then values train passengers’ time even more highly, justifying yet more investment in trains.

Imagine this happening at the Department for Health.²  Should we invest in cutting the waiting lists for lung cancer surgery or prostate cancer surgery?  Well, we value the time of the average prostate cancer patient more…

This is more than absurd.  This is a fraud.  This is a crude imitation of science and statistics being employed to disguise political decisions — to invest in transport for the rich and not for the poor — as pure objective economics.

1. Here is the 2009 NTS.  The dataset we want is NTS0705.  Do the maths properly if you like, but even a glance at the journeys-by-mode-by-income table makes it instantly obvious that cycling has flipped from being highly under-represented in the richer categories in 2001 to being very slightly over-represented in 2009.

2. I mean, imagine it happening this blatantly.  I would be surprised if there were not many many ways that state healthcare is subtly weighted in favour of the rich, whether designed and deliberate or not.

As usual, this is a hastily bashed-out, heat-of-the-moment blog post, not a careful scholarly article.  The thesis I am certain of, but the details are always open to amusing malapropisms and embarrassing subtle errors in calculations.  If you are distracted by them, point them out and I will fix them.

That’s not what I said, say scientists

According to SCIENTISTS, “pollution is not improved by c-charge.”  (“Improved”? These scientists are so sloppy with their language.)

Journalists all over the city are this week reporting that the congestion charge has not reduced air pollution problems in central London, and that’s a fact, proven by science.  (As far as I know, the CCharge was never about air pollution — the clue’s in the name. But it’s potentially an interesting thing to look at all the same.  I can invent in my head plausible hypotheses for why it would improve air quality, and why it wouldn’t, but both would be useless without evidence either way.)

Unfortunately, I’m having a little trouble finding out who these so-called scientists quoted as the source for the claim are.  I asked scientists on twitter, but they couldn’t remember making the statement.

What I can easily find is a set of documents (none of them making the claim) reviewing work that explores a potential link between the CCharge and air pollution.  The documents are not new research published as peer-reviewed articles in a scientific journal.  They are a “research report” — a King’s College academic’s review of what we know about the CCharge and air pollution — coupled with commentary and a press release.  The documents are all commissioned and published by the “Health Effects Institute”,

a nonprofit corporation chartered in 1980 as an independent research organization to provide high-quality, impartial, and relevant science on the health effects of air pollution. Typically, HEI receives half of its core funds from the US Environmental Protection Agency and half from the worldwide motor vehicle industry.

And that’s fine.  If the content is good, it doesn’t matter who funded it or where it was published.  I’m merely establishing exactly who is saying what.  The exact people are:

  • Professor Frank Kelly, an environmental health researcher specialising in air pollution, who (as leader of an independent group of scientists) wrote the comprehensive research report reviewing the evidence.
  • HEI’s Health Review Committee, who wrote a short commentary on Kelly’s research report.
  • HEI’s press office, who wrote the press release, which is the only thing that most journalists read.

The main line of research reviewed by Kelly looked at roadside and background levels of nitrogen oxides (NOx), carbon monoxide (CO) and small particulates (PM10).  The data compared the change (if any) in these pollutants at locations within the CCharge zone from a few years before implementation to a few years after implementation.  It did the same for control locations in London but outside of the CCharge zone, to account for any unrelated trends in air pollution.
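The comparison is essentially a difference-in-differences: the change inside the zone, minus the change at the control sites, is what gets attributed to the CCharge. A sketch with invented pollutant readings:

```python
# Invented illustrative readings (e.g. annual mean roadside NOx, ug/m3).
zone_before, zone_after = 60.0, 55.0        # sites inside the CCharge zone
control_before, control_after = 58.0, 56.0  # comparable sites outside

zone_change = zone_after - zone_before                  # -5.0
background_change = control_after - control_before      # -2.0 (city-wide trend)

# The control-adjusted change is what the studies attribute to the CCharge.
ccharge_effect = zone_change - background_change        # -3.0
print(ccharge_effect)
```

Note that this attribution only works if the control sites really are untouched by the intervention, which, as the footnotes to this post point out, is questionable here.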

Kelly’s report concluded that there was no evidence of a CCharge effect on roadside levels of NOx; a complicated effect on background levels of NOx (whereby one type was marginally reduced and another type increased, especially near the boundary of the zone); but a marginal reduction in carbon monoxide and a reduction in particulates becoming more pronounced the closer one gets to the CCharge zone.  So the overall conclusion is that there is a small amount of evidence to indicate that the CCharge has made a small reduction to air pollution (the exact opposite of the claim attributed to “scientists” in the headlines).  However, the data was extremely limited — in some cases to single data points — and Kelly’s report doesn’t put much weight on any of the conclusions.

Even where there is sufficient data, Kelly’s report indicates that there are limitations to what this kind of data can say about the CCharge effects.  The CCharge zone is very small, he points out, and our atmosphere somewhat fluid: the air in London blows around and mixes, so even with sufficient data, this study design is not an optimal way to answer questions about the CCharge.* **

All of these limitations in study design and data quantity are reflected in the Health Review Committee’s short commentary on the report:

Ultimately, the Review Committee concluded that the investigators, despite their considerable effort to study the impact of the London CCS, were unable to demonstrate a clear effect of the CCS either on individual air pollutant concentrations or on the oxidative potential of PM10. The investigators’ conclusion that the primary and exploratory analyses collectively indicate a weak effect of the CCS on air quality should be viewed cautiously. The results were not always consistent and the uncertainties surrounding them were not always clearly presented, making it difficult to reach definitive conclusions.

Which is to say: the research so far isn’t really capable of answering any questions satisfactorily.  Such evidence as there is points to a small improvement in air quality thanks to the CCharge, but none of it is very good.  They go on to make the academic’s favourite conclusion: more research is necessary.

That’s right, this is a 121 page research review with associated commentary which simply concludes that the existing data is insufficient to tell us anything useful at all.  That’s no criticism of Kelly or the HEI.  They set out to review the evidence; the evidence just happens to be severely limited.

The Health Effects Institute decided to press release this.  “Study finds little evidence of air quality improvements from London congestion charging scheme,” the press release screams in bold caps.  “Pollution not improved by C-Charge,” says Londonist. Can you spot the difference between the HEI press release and the Londonist headline?

There is an old saying that absence of evidence is not evidence of absence.***  It’s a classic source of bad science and bad journalism, and in this case it nicely sums up what is wrong with the Londonist piece.  A review which actually found very weak evidence that the CCharge improved air quality is covered as a study which found hard proof of the exact opposite.

* Indeed, Boris Johnson would like to blame all of the city’s problems on clouds blowing in from the continent rather than the motor vehicles that account for most of it.

** I could add to this limitation the fact that the CCharge was not merely meant to cut car use within the zone: it was meant to fund a massive increase in bus frequencies, subsidise fares, and generally make buses and trains more inviting throughout London.  The effect of the CCharge on road traffic throughout the capital is complex, so it’s questionable whether the “control” sites can be said to be unaffected by the intervention.

*** Before someone points it out, yes I know it’s a bit more complicated than that, but in this case the saying applies nicely.

What is a bicyclist?

This post is part of a series: it starts with the intro to the helmets issue, then the summary of the best evidence on helmets, then a quick diversion into how dangerous cycling is. And it won’t end here…

A good review of a medical intervention starts by explaining the population being studied.  The Cochrane review of helmets for preventing head injuries in bicyclists explains that its population is the set of bicyclists who sustained an injury that was sufficiently major to make them go to the ER for treatment (and not sufficient to kill them before they could seek treatment).

The review does not explain what they mean by a bicyclist.  (And since the original papers under review are closed-access, behind an extortionate paywall, we can’t know whether those do.)  Presumably they mean people riding a bicycle at the time that they sustained their injury.

Is that people riding their bicycle leisurely along a rail trail or towpath?

Is that people touring, head down into the wind in the deserted mountains?

Is that people racing in a peloton down the dual carriageway?

Is that kids in the BMX park?

Is that mountainbikers on the downhill courses?

Is it businessmen on their Bromptons riding through the stop-start city traffic?  Old ladies bouncing down cobbled streets on their step-through upright bikes?  Village vicars doing their rounds?

Mountainbikers, city commuters, and rail trail riders are very different people, exposed to completely different environments and risks, with very different helmet-wearing and hospitalisation rates.  Lumping them all together is like lumping mountain hikers, sprinters, traceurs, marathon runners, city pedestrians and country footpath strollers together under the heading “walkers”.  But lumping them together is exactly what the studies in the Cochrane review do, comparing the rate of head injury (as a proportion of all injuries) in helmet wearers and non-helmet wearers, and applying the results to make the recommendation that everybody should be made to wear a helmet while riding a bicycle, whatever their style and situation.  You may as well recommend Formula 1 safety gear for the drive to the supermarket.

Perhaps helmets help prevent head injuries in all people who use bicycles.  Perhaps they help mountainbikers more than tourists.  Perhaps it’s the other way around.  We don’t know.  We could know.  The researchers could have made sure to collect the data (perhaps the data is even already there, in the medical records) and then done sub-group analyses to give individual results for separate groups of bicyclists.  But they didn’t.  Why not?  Did it just not occur to them that “bicycling” might not be a single pursuit?  Or did they just assume that it didn’t matter, or that nobody would notice?  Either way, it amounts to a pretty serious limitation when you’re asking “should we legislate to ban all kinds of bicycle use except where the bicycle user is wearing a helmet?”
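A toy illustration of how the lumping can manufacture a result. These are invented numbers, constructed so that helmets make no difference within either group of riders, yet appear protective once the groups are pooled, purely because the low-head-injury group also happens to be the high-helmet-wearing group:

```python
# Invented case-series counts: (head injuries, all injuries) for
# helmeted and unhelmeted riders, in two very different populations.
groups = {
    #                  helmeted   unhelmeted
    "mountainbikers": ((18, 90), (2, 10)),   # 20% head injuries either way
    "commuters":      ((4, 10), (36, 90)),   # 40% head injuries either way
}

def pooled_rate(idx):
    """Head injuries as a proportion of all injuries, pooled across groups."""
    head = sum(g[idx][0] for g in groups.values())
    total = sum(g[idx][1] for g in groups.values())
    return head / total

print(pooled_rate(0), pooled_rate(1))  # 0.22 vs 0.38
```

Pooled, helmet wearers show 22% head injuries against 38% for bare heads: a sizeable apparent “effect” from a dataset in which helmets, by construction, do nothing. This is exactly the confounding that a sub-group analysis would expose.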

Killer cures

What kind of moron does not wear a helmet whilst riding a bike? Anyone that stupid deserves to have their brains scrapped off the road. —Dave, bloke commenting on the failed Melbourne bike share.

Cycle in London without a helmet?  You’d need your head examined… —Ross Lydall, Evening Standard transport correspondent.

The BMA, as a part of its policy to improve safe cycling supports compulsory wearing of cycle helmets when cycling for children and adults. —The British Medical Association

I know a lot of you find the whole helmets thing — whether they “help” or “work” or not — tiresome and unimportant.  Well tough.  Bicycle helmets are a medical intervention — a special kind of medical intervention — and whether or not medical interventions work and are worthwhile is always a fascinating subject.  More importantly, a large proportion of the general public and of journalists assume that helmets work, and the British Medical Association campaigns for compulsory bicycle helmet laws.  What the BMA does matters.  If the BMA endorses a medical intervention, we can’t dismiss arguments about it as tiresome and unimportant.

Archie Cochrane, the influential champion of modern evidence-based medicine and one of history’s most underrated heroes, is said to have played a mischievous prank on colleagues.  In an age when doctor knew best, Cochrane managed to organise a randomised trial of two care regimens for recovering heart attack patients: extensive hospital care (which every doctor knew was what a heart attack patient needed) versus home care.  A few months into the trial he convened his colleagues in the monitoring group to break the bad news that eight home care patients had died versus four hospital care patients.  His colleagues’ fears had been proven correct: hospital treatment was clearly far superior to home treatment and the trial must be stopped immediately as it would simply be unethical to continue to subject patients to dangerous home care.  At which point Cochrane took another look at his notes and declared that, to his great embarrassment of course, he had misread his shorthand: eight hospital patients had died for only four home care patients.  After the awkward silence, the monitoring group all agreed that it was far too early to draw any conclusions from such small numbers and at such an early stage — it could be pure chance that more patients died in hospital care.  The trial went on and never did provide any evidence that hospital care is any better than home care.
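For the record, the monitoring group’s second instinct was the right one, and you can check it on the back of an envelope.  A quick sketch (assuming, which the anecdote doesn’t state, equal-sized arms, and treating each of the twelve deaths as a coin flip between the two arms):

```python
from math import comb

# Under the null hypothesis (care regimen makes no difference), each of the
# 12 deaths is equally likely to fall in either arm of the trial.
deaths_hospital, deaths_home = 8, 4
n = deaths_hospital + deaths_home

# Two-sided p-value: probability of a split at least this lopsided by chance.
worst = max(deaths_hospital, deaths_home)
p_one_sided = sum(comb(n, k) for k in range(worst, n + 1)) / 2**n
p_two_sided = 2 * p_one_sided
print(f"p = {p_two_sided:.2f}")  # an 8 vs 4 split is entirely unremarkable
```

At p of roughly 0.39, an 8 vs 4 split is the sort of thing chance produces all the time — which is why stopping the trial on those numbers would have been bad science in either direction.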

It seems obvious that bicycle helmets are a good thing.  They save lives.  They prevent life-changing head injuries.  If your head is fast approaching concrete, you want something to intervene.  It’s common sense, right?  You’d be mad not to wear one.

But Cochrane and his fellow mid-20th century proponents of evidence-based medicine showed that facts do not always match common sense.  The obvious answer is not always the correct one.  The obvious common sense fact that hospital care is better than home care for recovering heart attack patients turned out not to be correct.  As a new generation of doctors recognised the importance of evidence-based medicine, randomised controlled trials were belatedly carried out on nearly everything that doctors do.  And, oops, they discovered that a lot of practices that doctors had considered to be simple obvious common sense had actually been harming their patients, ruining lives and sometimes killing people.

For a long time I took a Pascal’s Wager on bike helmets: while I had been given various reasons to believe that even if there was a benefit from wearing one it was probably marginal, there was no good reason not to wear one.  But the lesson from Cochrane — that common sense can kill you — is that there could be a very good reason for not wearing one.  What if wearing a bicycle helmet actively increases your risk of injury and death while riding a bicycle?  We can’t just assume that it doesn’t.

How could bicycle helmets possibly be bad for you?  Concrete meets head: intervention surely a good thing?  As that great 21st century populariser of evidence-based medicine would say: I think you’ll find it’s a bit more complicated than that.  In helmets, as in most transport issues, we seem to be obsessed with the engineering and overlook the way that people behave.  Helmet efficacy is as much a question of psychology as it is physics.

Because the interesting aspect of helmet research is not so much how they affect your chances of surviving an accident, but how they affect your chances of having an accident.  It all comes back to how road users behave, and there are reasons to believe that helmet use could change people’s behaviour in a way that increases the accident rate.  Many readers will already be familiar with the two most established lines of research: risk compensation and the safety-in-numbers effect.  I’ll look at those in more detail another time, but briefly, risk compensation proposes that we adjust our behaviour according to perceived risks — in this case, the cyclist wearing the helmet perceives himself to be at reduced risk, and happily cycles with less care; more importantly, the car, bus and truck drivers around him drive with less care.  The safety-in-numbers effect proposes that cyclists are safer when there are more cyclists on the road — both in that specific time and place, as other vehicles will have to slow and use caution around them; and in general, as other road users will be expecting to see cyclists and are more likely to know how to behave around them.  If the perception is that cycling is a dangerous extreme sport that requires a helmet, and if that perception puts people off cycling, then the safety-in-numbers effect is diminished.

It’s easy to dismiss these things without considering them: helmets are hard but simple; behaviour is soft but complicated.  It’s easier to go with common sense.  But common sense is often what bad science is made of, and common sense can kill you.

That doesn’t mean we can just assume that helmets are ineffective or bad.  With a medical intervention, you start from scratch, collect the data, and follow the evidence wherever it takes you.  This introductory post and its title are not supposed to bias our exploration of the evidence one way or another, only to get us beyond the unexamined assumption that helmets work.

So what’s the best evidence on bicycle helmets?  Named in honour of the pioneer Archie Cochrane, the Cochrane Collaboration systematically reviews the evidence for medical interventions.  A Cochrane Review looks carefully at all of the research that has been conducted on an intervention, considers the factors affecting the quality of each piece of research, and synthesises the results of all of the research to a conclusion which will generally be considered by medical practitioners to be the best knowledge we currently have on that intervention.  In a field that must always remain sceptical of the status quo and open to new evidence, a Cochrane Review is in practice considered to be the closest approximation we have to The Truth.  Good doctors don’t use their common sense, they use Cochrane Reviews.

The Cochrane Collaboration have reviewed the evidence for bicycle helmet efficacy.  This weekend, I’ve got half a dozen posts looking at that evidence, the way that it is presented by the Collaboration, and the evidence that the Collaboration has chosen to omit.
