Archimedes seeks to assist practising clinicians by providing ‘evidence-based’ answers to common questions that are not at the forefront of research but are at the core of practice (format adapted from BestBETS published in the Emergency Medicine Journal). A full description of the format is available online at http://bit.ly/ArchiTemplate.
Readers wishing to submit their own questions – with best evidence answers – are encouraged to review those already proposed at http://www.bestbets.org. If your question still hasn't been answered, feel free to submit your summary according to the instructions for authors at http://bit.ly/ArchiInstructions.
Confident in predicting? Meta-analysis models, step two
So, in a previous column1 I made a foray into the dangerous world of statistical models of meta-analysis. In this one, I'll try to explain why we should doubt random effects meta-analyses more than we often do.

To recap: fixed effect means there is one truth, unaltered across all settings, times and groups of patients. Random effects implies the truth varies across any or all of these, which means we can only get at the 'average' effectiveness and only guess at how good it will be in our own setting.

Each meta-analysis gives you a summary result and confidence interval. In a fixed effect analysis, the summary is the best guess of how good the treatment is, and the confidence interval gives you a fair idea of where the truth really lies. With a random effects result, it's similar, but the confidence interval tells you where the 'average' of the true effects is likely to be found; the effect in a particular setting may be even more extreme than this.

What we'd really like to know is how much the true effects vary, and this is (very occasionally) reported as the 'prediction interval'. A prediction interval looks a lot like a confidence interval, but it is the range within which, given the information we have from the review, we are 95% sure the true effectiveness will lie in a new setting. It captures just how uncertain we really are about the truth in varied groups.

If it hasn't been reported, you can calculate it, but you'll need a pencil, a sharp intake of breath/coffee/alcohol, and a look at the very readable paper by Riley and friends in the BMJ.2 (If you can't manage that, then on the whole, if you take half the width of the confidence interval and extend each end outwards by that value, you'll not be far off.)

Now you should feel braver and more confident in looking a meta-analysis in the eye and asking 'But is it really as accurate as all that?'.
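For readers who prefer arithmetic to pencils, both the back-of-the-envelope rule and the exact formula from Riley and colleagues2 can be sketched in a few lines of Python. This is only an illustration: the function names and the example numbers are mine, and for the exact formula you must look up the appropriate t critical value yourself (97.5th percentile of a t distribution with k minus 2 degrees of freedom, where k is the number of studies), since it is not computed here.

```python
from math import sqrt

def approx_prediction_interval(ci_lower, ci_upper):
    """Rough rule from the column: extend each end of the 95%
    confidence interval outwards by half the interval's width."""
    half_width = (ci_upper - ci_lower) / 2
    return ci_lower - half_width, ci_upper + half_width

def riley_prediction_interval(mu, se, tau2, t_crit):
    """Approximate 95% prediction interval per Riley et al.:
    mu +/- t * sqrt(tau^2 + se^2), where mu is the pooled estimate,
    se its standard error, tau2 the between-study variance, and
    t_crit the t quantile (supplied by the caller, from tables)."""
    margin = t_crit * sqrt(tau2 + se ** 2)
    return mu - margin, mu + margin

# Hypothetical pooled result: log odds ratio 0.5, 95% CI 0.2 to 0.8.
print(approx_prediction_interval(0.2, 0.8))  # approximately (-0.1, 1.1)
```

Note how the quick rule turns a confidence interval that excludes 'no effect' into a prediction interval that includes it: the average may look convincing while the effect in your own setting remains genuinely uncertain.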
Bob Phillips, Centre for Reviews and Dissemination, University of York, York YO10 5DD, UK; email@example.com
Competing interests None.
Provenance and peer review Commissioned; internally peer reviewed.