Doctors have a prime role as diagnosticians and are encouraged to practise evidence based medicine (EBM). The classic Bayesian formulation of evidence based diagnostic testing1 relies on the estimation of a pretest probability, modified by the probabilistic estimate of test accuracy to produce a post-test probability. If this is high enough to cross a “treatment threshold”, then therapy is commenced. Alternatively, if it is low enough, then one disregards the possibility of the diagnosis.
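To make the arithmetic concrete, here is a minimal sketch of that Bayesian updating step. All of the numbers (pretest probability, sensitivity, specificity and the two thresholds) are hypothetical, chosen only for illustration, and the function name is ours rather than anything from the paper.

```python
# A minimal sketch of Bayesian diagnostic updating via likelihood ratios.
# All numbers below are hypothetical, chosen only for illustration.

def post_test_probability(pretest_p, sensitivity, specificity, positive=True):
    """Convert a pretest probability into a post-test probability."""
    # Likelihood ratio for a positive or negative result.
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.20  # clinician's estimate before testing
p_pos = post_test_probability(pretest, sensitivity=0.90, specificity=0.80)
print(f"post-test probability after a positive result: {p_pos:.2f}")  # ~0.53

TREATMENT_THRESHOLD = 0.70  # treat if the probability crosses this
TEST_THRESHOLD = 0.05       # disregard the diagnosis below this
if p_pos >= TREATMENT_THRESHOLD:
    print("commence therapy")
elif p_pos <= TEST_THRESHOLD:
    print("disregard the diagnosis")
else:
    print("more information needed")
```

With these illustrative figures, a positive result with a likelihood ratio of 4.5 moves a 20% pretest probability to roughly 53%, which is still short of the treatment threshold, so further testing would be needed.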
This probability modifying philosophy of diagnosis isn’t the only approach currently practised in medicine. Other diagnostic traditions, such as the “anatomical” (the neurologist asking, “What level is the spinal cord lesion at?”), the “criterion based” (“Do they score enough for Kawasaki disease?”) and the “categorical” (a histopathologist asking, “Do those cells in that pattern look like graft rejection?”), are useful at other times and in other ways.2 But when you try to break things down like this, you soon see that each is a simplification. Using estimates of accuracy for a single test to quantify how the probability of disease changes understates what testing actually does. Many tests provide far more information about the patient and their condition than the simple presence or absence of disease (eg, the location of a tumour and its risk of complications), and diagnostic tests are often pieced together in a chain of information in order to arrive at the underlying problem.
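That chaining of tests can itself be written as repeated Bayesian updating: the post-test probability after one test becomes the pretest probability for the next. The sketch below shows this, but note the hidden assumption it encodes: multiplying likelihood ratios like this treats the tests as conditionally independent, which is often untrue in practice. The test names and likelihood ratios are invented for illustration.

```python
# A sketch of chaining diagnostic tests: each post-test probability
# becomes the pretest probability for the next test. This simple
# multiplication of likelihood ratios assumes the tests are
# conditionally independent, which often does not hold clinically.
# All names and numbers are hypothetical.

def update(p, likelihood_ratio):
    """One Bayesian update: probability -> odds -> scaled odds -> probability."""
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.10  # initial clinical impression
for name, lr in [("history finding", 3.0), ("bedside test", 4.5), ("imaging", 6.0)]:
    p = update(p, lr)
    print(f"after {name}: probability = {p:.2f}")
# Prints 0.25, 0.60 and 0.90 for these illustrative figures.
```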
What the paper by Sox3 in this issue of the Archives suggests is that we’re actually very poor at understanding the arithmetic components of diagnosis. This might have been predictable. The paper sits alongside a series of studies from the last 30 years that have demonstrated how poor, on average, doctors are at using test performance descriptors. And it doesn’t …
Footnotes
Competing interests: None.