
Paediatric education



P. Ramnarayan, G.C. Roberts, R. Kapoor1, C. Edwards, A. Tomlinson, J. Britto.

Imperial College London at St Mary’s, 1 Princess Alexandra Hospital, Harlow

Background: Textual case simulations in examinations act as a proxy to measure real-life clinical decision-making. This concept hinges on objectively measuring the quality of subjects’ decisions. Many discrete measures have been used for this purpose, most of them relatively insensitive.

Aim: To develop and test the reliability and validity of a single objective continuous score to measure clinical assessment plan quality (comprising differential diagnosis, investigations and management items), generated as in real life without cues from multiple choice prompts.

Methods: First, a consultant panel independently produced “gold standard” clinical assessment plans for each case. Using a pre-assigned visual analogue scale (0–4 for diagnoses [judgements] and −2 to +2 for tests and management [actions]), “gold standard” decisions were marked as right-hand anchors. From a master list of ‘all subjects’ decisions’ generated for each case, irrelevant items were marked by the panel as left-hand anchors. Each remaining item was then scored independently by the panel. Discrepancies were resolved by consensus. The contribution of each item to the overall quality of the plan was differentially weighted by squaring the judgements’ score and cubing the actions’ score. One point was taken away for each irrelevant item in the subject’s plan. The resultant raw score was normalised by expressing it as a proportion of the ‘maximum achievable score’ for the case.
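The weighting and normalisation scheme described above can be sketched as follows. This is a minimal illustration only; the function name, item lists and panel scores are hypothetical, not the study’s data.

```python
# Hypothetical sketch of the weighted scoring scheme described above.
# Panel scores: judgements (diagnoses) on 0-4, actions (tests/management)
# on -2..+2. Judgement scores are squared, action scores cubed, one point
# is deducted per irrelevant item, and the raw score is normalised against
# the maximum achievable score for the case.

def plan_score(judgement_scores, action_scores, n_irrelevant,
               n_gold_judgements, n_gold_actions):
    """Return a subject's normalised plan-quality score for one case.

    judgement_scores  : panel scores (0-4) for the subject's diagnoses
    action_scores     : panel scores (-2..+2) for the subject's tests/management
    n_irrelevant      : count of subject items the panel marked irrelevant
    n_gold_judgements : number of 'gold standard' judgements for the case
    n_gold_actions    : number of 'gold standard' actions for the case
    """
    raw = (sum(s ** 2 for s in judgement_scores)
           + sum(s ** 3 for s in action_scores)
           - n_irrelevant)
    # Maximum achievable: every gold-standard judgement scored 4 (squared),
    # every gold-standard action scored +2 (cubed).
    max_score = n_gold_judgements * 4 ** 2 + n_gold_actions * 2 ** 3
    return raw / max_score

# Example: two diagnoses scored 4 and 3, three actions scored 2, 1, -1,
# one irrelevant item, against a gold standard of 2 judgements and 3 actions.
print(round(plan_score([4, 3], [2, 1, -1], 1, 2, 3), 3))  # → 0.571
```

Note that cubing (rather than squaring) the action scores preserves their sign, so harmful tests or management steps (negative scores) reduce the total while still being weighted more heavily than judgements.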

Results: Inter-rater reliability of the panel’s scores, assessed by the weighted kappa statistic, was good (κ 0.60, p<0.01). Excellent face and content validity were demonstrated. Correlation with overall plan quality, as assessed by 9 different consultants on a 5-point scale, was high (Spearman rank correlation ρ 0.65, p<0.01). Consultants’ scores were separated from students’ scores by 1 SD unit, as were scores for easy vs. difficult cases (good construct validity).

Conclusions: This objective, reliable and valid score for the measurement of clinical assessment plan quality can be used in the future to assess examinees’ decisions for case simulations, without the need for multiple choice prompts.


J.G.M. Crossley, H.A. Davies, C. Eiser.

Sheffield Children’s Hospital, UK

There is a proliferation of procedures requiring the measurement of doctors’ day-to-day performance – including ‘RITA assessment’ and revalidation. However, there is a serious shortage of frameworks and tools for the task. The shortage will make regulatory decisions ineffective and indefensible, and will adversely affect training and professional development. A Sheffield-based programme utilizes recent advances in educational theory to develop measures of the most important aspects of professional performance. This report picks up the challenge of the doctor–patient interaction.

A model of robust objects of measurement was derived from a systematic review of the literature, triangulated with a consensus exercise with RCPCH tutors. 511 children and parents rated the relational and communication performance of 66 paediatricians across 350 consultations against the elements of the model using an assessment instrument.

A G-study shows that many factors influence the ratings of performance, as expected from the model. These include the idiosyncrasies of the particular case (43%), the person giving the ratings (23%), and the tendency of different doctors to perform differently with girls or with boys (8%), in new-patient appointments or in follow-up (8%), and from the viewpoint of parents or children (7%). Children were consistently higher raters (3.4%), and female doctors were consistently rated more highly (0.6%). A factor analysis and three separate hypothesis tests strongly suggest that the instrument is a valid measure of the doctor–patient interaction. Finally, appropriate sampling can be used to control the unwanted effects on measurement (above); the combined ratings of 25 parents provide a reflection of a doctor’s relational and communication performance, compared with other doctors, that is both reproducible and discriminating across 80% of situations (better than a 3-hour MCQ). The structured feedback is ideal for training and professional development. This provides a highly feasible means of assessing a crucial area of performance with excellent reliability, validity and educational impact.


A.B. Isaacs, M.E. Blair.

Department of Paediatrics, Imperial College, London; Northwick Park Hospital, Harrow

Introduction: Development of self-directed independent learning skills and recognition of one’s own strengths and weaknesses are key aims of medical education. The aim of this study was to assess the ability of students to self evaluate paediatric history-taking and communication skills.

Methods: A formative semi-structured assessment proforma was devised. Twelve 4th year students were video-recorded during a consultation in paediatric outpatients. They independently assessed their performance by video using the proforma. This was compared with assessment by an independent observer (teaching fellow) using a weighted kappa agreement score (κw). Informed consent was obtained from families and students.
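The weighted kappa statistic used here penalises disagreements in proportion to their distance on the ordinal scale. A minimal linearly weighted implementation is sketched below; the function, rating scale and paired data are illustrative, not the study’s.

```python
# Minimal sketch of linearly weighted kappa for two raters on an ordinal
# scale, of the kind used to compare student self-assessment against an
# independent observer. Data and scale labels here are illustrative only.

def weighted_kappa(rater_a, rater_b, categories):
    """Linear-weighted kappa for paired ordinal ratings."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Joint distribution of the two raters' categories
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1 / n
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater B marginals
    # Linear disagreement weights: 0 on the diagonal, growing with distance
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Illustrative 4-point scale (1=poor .. 4=good), eight paired ratings
students = [2, 3, 3, 2, 4, 3, 2, 3]
observer = [2, 3, 4, 2, 4, 2, 3, 3]
print(round(weighted_kappa(students, observer, [1, 2, 3, 4]), 2))  # → 0.52
```

Because near-misses are only partially penalised, weighted kappa is more forgiving than unweighted kappa for ordinal scales such as this one, where ‘some deficiencies’ versus ‘satisfactory’ is a smaller disagreement than ‘poor’ versus ‘good’.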

Results: The students’ mean score by self-assessment was 2.2 (95% CI 1.8 to 2.6) compared to 2.3 (95% CI 1.9 to 2.7) by the assessor (scale: 2=some deficiencies, 3=satisfactory). The overall agreement between students and assessor was 63% and κw = 0.47 (95% CI 0.38 to 0.56). Individual student agreement scores ranged from ‘fair’ (κw = 0.25) to ‘good’ (κw = 0.71). Agreement was stronger for history-taking (κw 0.49) than communication (κw 0.35). 8/12 (67%) of students under-scored themselves. 11/12 (92%) students rated the experience as ‘useful’ or better and thought that further self-assessment would be beneficial. Areas of good agreement: presenting problem in parents’ words, effect on family, immunizations, growth, introduction, summarising. Areas of poor agreement: description of presenting problem, social history, open questions, interruption.

Conclusion: Students demonstrated overall good agreement with an assessor in self-assessment of paediatric clinical skills. Agreement was stronger for history-taking content than communication skills. All students accurately identified specific areas for self-improvement. Students found video self-assessment to be a useful experience. Further studies using paired videos should be performed.


F. Pelz, R.M. Brooks, L. Horrocks, E.H. Payne.

Department of Child Health, University of Wales College of Medicine, Heath Park, Cardiff CF14 4XN, UK

Aim: Evaluation of the educational effectiveness of a multidisciplinary MSc course by assessing enrolment, expectations, satisfaction, performance and impact on career and professional practice.

Methods: 41 students (course entrants 1994–97). Semi-structured telephone interview data, collected from 36/39 students by an independent researcher using a questionnaire piloted on 2 students (overall response rate 93%), were transcribed into themes and analysed inductively.

Results: Student demographics: 17 medical doctors, ‘Drs’ (Consultants, Training and NCGs) and 24 other non-medical health professionals, ‘NDrs’ (Nurses, HVs, physiotherapists, occupational therapist, dietician); 4 students were male; mean professional child health experience: 10 years (2–32 years). 14/36 (39%) students (5/13 Drs, 9/23 NDrs) chose the course explicitly because it was inter-professional. For 27/36 (75%) (11/13 Drs, 16/23 NDrs) the multidisciplinary nature was one of its 3 strongest features, giving new insights, a balanced variety of views, respect and equality between professionals, and a holistic approach to child health. The course aided desired career progression in 19/38 (50%) students (7/14 Drs, 12/24 NDrs). In 34/38 (89%) students (11/14 Drs, 23/24 NDrs) the course influenced their working practice (i.e. increased professional competence, broader knowledge, improved inter-professional communication skills). 9/38 (24%) students (3/14 Drs, 6/24 NDrs) used course ideas to innovate in patient or teaching services. In 34/38 (89%) students (10/14 Drs, 24/24 NDrs) the course developed their research skills and in 35/38 (92%) (11/14 Drs, 24/24 NDrs) their presentation skills. For all but 2 participants (1 Dr), the course met all or most of their expectations or objectives (94%). Course assessment (essays, exam, dissertation) results did not differ between Drs and NDrs.

Conclusion: Multidisciplinary education at Masters level in Child Health creates a learning environment valued by Drs and NDrs alike, increasing perceptions of inter-professional skills, professional confidence and competence.


D.K. Pedley, L. Finlay, S. Tung, S. Mukhopadhyay.

Tayside Institute of Child Health, Ninewells Hospital, Dundee DD1 9SY

Aims: The pulse oximeter is widely used to measure oxygen status in seriously ill paediatric patients. Concern has been expressed regarding the knowledge and training of professionals in the U.K. in relation to using this device.1 Our work aims to define the extent of knowledge and training in junior doctors in a U.K. teaching hospital, and to evaluate the performance of a written teaching package.

Method: All 430 medical staff below consultant grade working in the Tayside University Hospitals Trust in January 2001 were randomly assigned to receive a written teaching package on the theory and applications of pulse oximetry or no communication at all (n=215 in each group). Two weeks later, all doctors were sent an anonymized structured questionnaire testing knowledge of pulse oximetry. Questionnaires were scored by the authors, blinded to group allocation.

Results: 125 questionnaires were completed and returned. 53 respondents had received the teaching package and 72 had not. Although 64% of doctors used the oximeter frequently in their work, only 23.8% had received any prior training on its use. Deficiencies were identified in knowledge of principles and clinical application of pulse oximetry. The mean score in the group who received the teaching package was significantly higher than in the group who did not (16.1, 95% CI 15.4 to 16.9 v 13.7, 95% CI 12.9 to 14.5, p<0.001).

Conclusions: Despite the common use of pulse oximetry for monitoring sick patients, serious problems exist in the knowledge of medical staff in relation to this equipment. We suggest that this profound lack of knowledge is placing children at additional risk. A brief written teaching package can be extremely effective in improving the level of knowledge of saturation monitors in junior doctors, and may enhance the delivery of care to paediatric patients.



R.D. Palmer1, A.T. Fox1, E.S. Trewavas2, D. Sekeran1, J.G.M. Crossley3, H.A. Davies3.

1 Department of Paediatrics, Luton & Dunstable Hospital, Luton; 2 General Practice Registrar, Huntingdon VTS, Cambs; 3 Department of Paediatrics, Sheffield Children’s Hospital

Aim: Communication with medical colleagues is an essential part of high quality patient care and outpatient clinic letters constitute a significant component of any paediatrician’s communication workload. The aim was therefore to improve the quality of outpatient clinic letters intended for General Practitioners (GPs), within a large district general hospital paediatric outpatient department.

Methods: Using the ‘Sheffield Assessment Instrument for Letters’ (SAIL), a previously validated, inter-rater reliable and reproducible method of assessing the quality of written communication, fifteen unselected letters from all consultants and specialist registrars were analysed on two separate occasions. A paediatrician and a GP, representing the stakeholders in the communication process, performed the analysis, but neither was involved in the day-to-day care of the patients. Following individualised feedback, the audit cycle was completed three months later without forewarning.

Results: All 7 doctors available for reassessment completed the audit loop. The mean quality score, derived for each letter from the summation of the 20-point checklist and a global score, improved from 23.3 (95% CI: 22.1–24.2) to 26.6 (25.8–27.4), p=0.001. Mean global scores also improved by 1.24 (0.93–1.55), p<0.002 for the paediatric assessor and 0.57 (0.14–1.01), p<0.01 for the GP assessor.

Conclusions: This study demonstrates that SAIL can provide feedback with a powerful educational impact. It also demonstrates an effective means of improving the quality of outpatient letters provided to the GP, as is required by clinical governance. Further, the use of such a tool may be valuable in the revalidation and appraisal process currently being developed in the UK and in individual preparation for this.