Why was a new assessment needed?
A new paediatric postgraduate curriculum was introduced in the UK in 2007.1 The training programme is based on three levels of training, and at each stage, progress is assessed by a range of different assessment tools. The assessment strategy2 outlines the assessments expected. These meet the Postgraduate Medical Education and Training Board's principles for assessment3 and use the utility model,4 which takes into account five variables: reliability, validity, educational impact, acceptability and cost. These variables are weighted depending on the context and purpose of the assessment. In line with good practice in assessment,5 evidence from the workplace and the examinations is triangulated to make an overall judgement about a trainee's fitness to practise, and this information is submitted to the Annual Review of Competency Panel which, following the guidance,6 may approve the trainee's progress. The ability of the available assessment tools to assess the trainee across the framework of competences was examined. This revealed a relative lack of tools to assess the competences to be acquired at the later stages of training.
In the final stages of training, assessment relies entirely upon workplace assessments, predominantly multi-source feedback and case-based discussion (CBD). Multi-source feedback in paediatrics collects structured feedback from a range of healthcare professionals who work with the trainee and are able to give informed judgements about the trainee's performance. The rating tool consists of 24 questions across all domains of good medical practice, rated on a six-point scale from very poor to very good. There is, in addition, the opportunity for free-text comments. There is increasing evidence that this type of feedback from colleagues and patients is reliable7–9 and can discriminate between …