
Towards evidence based medicine for paediatricians
  1. Bob Phillips
  1. Evidence-based On Call, Centre for Evidence-based Medicine, University Dept of Psychiatry, Warneford Hospital, Headington OX3 7JX, UK; bob.phillips{at}


In order to give the best care to patients and families, paediatricians need to integrate the highest quality scientific evidence with clinical expertise and the opinions of the family.1 Archimedes seeks to assist practising clinicians by providing "evidence based" answers to common questions which are not at the forefront of research but are at the core of practice. In doing this, we are adapting a format which has been successfully developed by Kevin Mackway-Jones and the group at the Emergency Medicine Journal—"BestBets".

A word of warning. The topic summaries are not systematic reviews, though they are as exhaustive as a practising clinician can produce. They make no attempt to statistically aggregate the data, nor do they search the grey, unpublished literature. What Archimedes offers are practical, best evidence based answers to practical, clinical questions.

The format of Archimedes may be familiar. A description of the clinical setting is followed by a structured clinical question. (These aid in focusing the mind, assisting searching,2 and gaining answers.3) A brief report of the search used follows—this has been performed in a hierarchical way, to search for the best quality evidence to answer the question.4 A table provides a summary of the evidence and key points of the critical appraisal. For further information on critical appraisal and the measures of effect (such as the number needed to treat, NNT), books by Sackett5 and Moyer6 may help. To pull the information together, a commentary is provided. But to make it all much more accessible, a box provides the clinical bottom lines.
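As a brief aside on one of the measures of effect mentioned above: the NNT is simply the reciprocal of the absolute risk reduction (the difference in event rates between control and treatment groups). The sketch below illustrates this with hypothetical numbers, not figures from any of the topics in this issue.

```python
def nnt(control_event_rate, treatment_event_rate):
    """Number needed to treat: the reciprocal of the
    absolute risk reduction (ARR)."""
    arr = control_event_rate - treatment_event_rate
    return 1 / arr

# Hypothetical example: an adverse outcome occurs in 20% of
# control children and 15% of treated children, so the ARR is
# 5% and 20 children must be treated to prevent one event.
print(round(nnt(0.20, 0.15)))  # → 20
```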

The electronic edition of this journal contains extra information for each of the published Archimedes topics. The papers summarised in tables are linked, by an interactive table, to more detailed appraisals of the studies. Updates to previously published topics will be available soon from the same site, with links to the original article.

Readers wishing to submit their own questions—with best evidence answers—are encouraged to review those already proposed. If your question still hasn't been answered, feel free to submit your summary according to the Instructions for Authors. Three topics are covered in this issue of the journal.

  • Is gradual introduction of feeding better than immediate normal feeding in children with gastroenteritis?

  • Are follow up chest x ray examinations helpful in the management of children recovering from pneumonia?

  • Should preterm neonates with a central venous catheter and coagulase negative staphylococcal bacteraemia be treated without removal of the catheter?

How do we measure agreement?

How do we measure agreement—clinical agreement between observers—in order to indicate how good or bad at it we are? It's a problem which is raised in the interpretation of chest x rays in the second of this month's Archimedes topics. The statistic chosen to show the degree of agreement is kappa (κ), which tells us how much agreement there is beyond chance. Take the situation of two observers reporting chest x rays, say, and classifying them as abnormal or normal. If they were to report an equal number of abnormal and normal films, then we would expect by chance alone the two observers to agree 50% of the time. Kappa tells you how much the agreement is beyond chance: in this instance 75% agreement would be a kappa of 0.5, since 75% agreement is 25% beyond chance, and this is half of the "perfect" extra of 50%. (The reason we use kappa, rather than just taking 50% off the simple agreement between two observers and using that value, is that agreement due to chance varies with how often the observers classify the chest x rays as normal or abnormal. If they were to report three normal to one abnormal, then we'd expect them to agree, by chance, 62.5% of the time.) Exactly how to calculate kappa is a bit irrelevant, but for a rough guide to interpretation see table 1.
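For readers who do want to see the arithmetic, the worked example above can be reproduced in a few lines. This is an illustrative sketch rather than anything from the article itself: the agreement table (200 films, each observer calling half abnormal, agreeing on 150) is chosen to match the 75%-agreement, kappa = 0.5 case described in the text.

```python
def cohen_kappa(table):
    """Cohen's kappa for a k x k agreement table, where
    table[i][j] counts cases rated category i by observer A
    and category j by observer B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of cases on the diagonal.
    po = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: for each category, the product of the two
    # observers' marginal proportions, summed over categories.
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pe = sum(row_marg[i] * col_marg[i] for i in range(k))
    return (po - pe) / (1 - pe)

# 200 films; each observer calls 100 abnormal and 100 normal,
# and they agree on 150 of the 200 (75%). Chance agreement is
# 0.5 x 0.5 + 0.5 x 0.5 = 50%, so kappa = (0.75 - 0.5)/(1 - 0.5).
table = [[75, 25],   # A abnormal: agree / disagree
         [25, 75]]   # A normal:  disagree / agree
print(cohen_kappa(table))  # → 0.5
```

Changing the marginals changes the chance agreement, which is the point made in the parenthesis above: with both observers reporting three normal films to one abnormal, chance agreement alone rises to 0.75 × 0.75 + 0.25 × 0.25 = 62.5%.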

Table 1 Interpretation of kappa
