Archimedes seeks to assist practising clinicians by providing ‘evidence-based’ answers to common questions that are not at the forefront of research but are at the core of practice (format adapted from BestBETS published in the Emergency Medicine Journal). A full description of the format is available online at http://bit.ly/ArchiTemplate.
Readers wishing to submit their own questions – with best evidence answers – are encouraged to review those already proposed at http://www.bestbets.org. If your question still hasn't been answered, feel free to submit your summary according to the instructions for authors at http://bit.ly/ArchiInstructions.
It's how mixed up? Meta-analysis models, step one
Well, I have to start with an apology. In one of these columns,1 I foolishly claimed that the difference between a fixed effect meta-analysis and a random effects meta-analysis was pointlessly academic. It's not. Now, this might start getting all statistical, but there is a clear and important difference. Meta-analysis comes in two main flavours: fixed and random. It's clinically important to understand what these things mean.

‘Fixed’ effect takes as an underlying truth that each of the studies in the meta-analysis gives us a glimpse of a single true ‘effect size’, and that any variation between them is due to chance alone. Sometimes the results seem too mixed up – heterogeneous – for this to be true. In this setting, we could consider using ‘random’ effects. ‘Random’ effects assumes that the studies actually have different ‘true’ effects, and that all we can do is take an ‘average’ of those effects. This may be because the treatment works differently in different populations (eg, hypertensives in black and Caucasian subjects) or because there are alternative dosing schedules which have different effects.

It's often said that a random effects approach should only be used after all attempts to explain the heterogeneity have been exhausted, perhaps by taking clinically sensible subgroups or by meta-regression. The reviewers and meta-analysts should make the decision based not primarily on the results of the meta-analysis, but on an understanding of the studies which make up their review. If this doesn't seem to have been done, you can do it yourself: look at the studies, decide if you think they can reasonably reflect a single true effect, and take a fixed effect approach. If they can't, take the random effects model and add a pinch of salt.
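The difference between the two models can be made concrete with a small calculation. The sketch below, using entirely hypothetical study data (the log odds ratios and variances are invented for illustration), pools five studies with inverse-variance fixed effect weights, measures heterogeneity with Cochran's Q and I², and then re-pools with the standard DerSimonian–Laird random effects weights, which add an estimate of the between-study variance (tau²) to each study's own variance:

```python
# Hypothetical example: log odds ratios and their variances from five studies.
log_or = [0.10, 0.35, -0.05, 0.60, 0.20]
var = [0.04, 0.09, 0.05, 0.16, 0.06]

# Fixed effect: inverse-variance weights, assuming one common true effect.
w_fixed = [1.0 / v for v in var]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)

# Cochran's Q quantifies variation beyond chance; I^2 expresses it as a
# percentage of total variation.
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_or))
df = len(log_or) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird random effects: estimate tau^2 (between-study variance)
# and add it to each study's variance, which pulls the weights closer together.
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_rand = [1.0 / (v + tau2) for v in var]
pooled_rand = sum(w * y for w, y in zip(w_rand, log_or)) / sum(w_rand)

print(f"Fixed effect pooled log OR:   {pooled_fixed:.3f}")
print(f"Random effects pooled log OR: {pooled_rand:.3f}")
print(f"I^2 = {i_squared:.0f}%  (tau^2 = {tau2:.3f})")
```

Note how the random effects weights are flatter than the fixed effect ones: the bigger tau² is, the less extra influence a large study gets, which is exactly the ‘averaging across different true effects’ described above.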
Bob Phillips, Centre for Reviews and Dissemination, University of York, York YO10 5DD, UK; email@example.com
Competing interests None.
Provenance and peer review Not commissioned; internally peer reviewed.