A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: put simply, to discover whether use of e-cigs is correlated with success in quitting, which could suggest that vaping helps people stop smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as through the university, is that vapers are 28% less likely to stop smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective as an aid to quitting smoking, but actively counterproductive.
The result has, predictably, been uproar from e-cigarettes’ supporters in the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is clear that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% figure and look at what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, the results of which should be far less susceptible to any distortions that may have crept into an individual investigation?
(This can happen, for instance, by inadvertently selecting participants with a greater or lesser propensity to quit smoking because of some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general concept. And even from that simplistic outline, it’s immediately apparent where problems can arise.
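To make that “more sophisticated than averaging” point concrete, here is a minimal sketch of the standard fixed-effect (inverse-variance) pooling step, using invented odds ratios and confidence intervals – not figures from any of the studies discussed here:

```python
import math

# Invented (odds ratio, 95% CI low, 95% CI high) triples for three
# hypothetical cessation studies -- not data from any real paper.
studies = [(0.6, 0.4, 0.9), (0.8, 0.5, 1.3), (0.7, 0.45, 1.1)]

def pooled_odds_ratio(studies):
    """Fixed-effect (inverse-variance) pooling of log odds ratios."""
    total_weight = weighted_sum = 0.0
    for or_, lo, hi in studies:
        # Standard error recovered from the 95% CI width (+/- 1.96 SE)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1 / se**2          # more precise studies count for more
        total_weight += weight
        weighted_sum += weight * math.log(or_)
    return math.exp(weighted_sum / total_weight)

print(round(pooled_odds_ratio(studies), 2))  # -> 0.68
```

The key design choice is the weighting: each study contributes in proportion to its precision, so a large, tight study dominates several small, noisy ones – which is exactly why flaws shared by the large studies propagate straight into the pooled figure.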
If its results are to be meaningful, the meta-analysis has to somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it is introducing its own distortions.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a US anti-smoking nonprofit which generally takes an unwelcoming view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the US Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of them have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking than those who do not. This meta-analysis simply lumps together the errors of inference from all of these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
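The “heterogeneous studies” objection has a standard quantitative counterpart: Cochran’s Q and the I² statistic, which estimate how much the studies disagree beyond what chance would explain. A rough sketch, again with deliberately discordant invented numbers rather than anything from the actual paper:

```python
import math

# Invented log odds ratios and standard errors for three studies that
# point in conflicting directions (illustrative only).
log_ors = [math.log(0.4), math.log(1.3), math.log(0.7)]
ses = [0.15, 0.18, 0.16]

weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviation of each study from the pool
Q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(log_ors) - 1
I2 = max(0.0, (Q - df) / Q) * 100   # % of variation beyond chance

print(f"Q = {Q:.1f}, I^2 = {I2:.0f}%")
```

For these numbers I² comes out above 90%, a level conventionally read as “substantial heterogeneity” – a warning sign that a single pooled estimate may be meaningless, which is essentially the Truth Initiative’s complaint.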
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they tried to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts began. Thus, the research by its nature excluded people who had started vaping and quickly given up smoking; if such people exist in large numbers, counting them might have made e-cigarettes look a far more successful route to quitting.
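The arithmetic of Phillips’s objection is easy to see with a toy example (all numbers invented purely for illustration). If studies recruit only current smokers, vapers who quit smoking quickly are never enrolled, and so never counted as successes:

```python
# Toy illustration of the sampling problem Phillips describes.
# All numbers are invented; none come from the studies discussed.
N = 10_000                 # people who ever tried vaping to quit
fast_quitters = 3_000      # quit smoking before any study enrolled them
late_quitters = 1_400      # enrolled as current smokers, quit later

observed_rate = late_quitters / (N - fast_quitters)   # what studies see
true_rate = (fast_quitters + late_quitters) / N       # what actually happened

print(f"observed {observed_rate:.0%} vs true {true_rate:.0%}")
# prints "observed 20% vs true 44%"
```

Whether fast quitters really exist in numbers anything like this is exactly the empirical question the available studies cannot answer; the sketch only shows why their exclusion would matter if they do.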
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke want to give up combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that if these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some who did manage to quit – while including others who have no intention of quitting anyway – would certainly seem likely to affect the result of research purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a variety of study design factors, including whether the study population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is also a further, slightly cloudy area which affects much science – not only meta-analyses, and not just these particular researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ public relations departments.