PT - JOURNAL ARTICLE
AU - Calvin Heal
AU - Sarah Cotterill
AU - Andrew Graeme Rowland
AU - Natalie Garratt
AU - Tony Long
AU - Stephen Brown
AU - Grainne O'Connor
AU - Chloe Rishton
AU - Steve Woby
AU - Damian Roland
TI - Inter-rater reliability of paediatric emergency assessment: physiological and clinical features
AID - 10.1136/archdischild-2019-318664
DP - 2021 Feb 01
TA - Archives of Disease in Childhood
PG - 149--153
VI - 106
IP - 2
4099 - http://adc.bmj.com/content/106/2/149.short
4100 - http://adc.bmj.com/content/106/2/149.full
SO - Arch Dis Child 2021 Feb 01; 106
AB - Objective The Paediatric Admission Guidance in the Emergency Department (PAGE) score is an assessment tool currently in development that helps predict hospital admission using components including patient characteristics, vital signs (heart rate, temperature, respiratory rate and oxygen saturation) and clinical features (eg, breathing, behaviour and nurse judgement). It aims to assist in safe admission and discharge decision making in environments such as emergency departments and urgent care centres. Determining the inter-rater reliability of scoring tools such as PAGE can be difficult. The aim of this study was to determine the inter-rater reliability of seven clinical components of the PAGE score. Design Inter-rater reliability was measured by each patient having their clinical components recorded by two separate raters in succession. The first rater was the assessing nurse, and the second rater was a research nurse. Setting Two emergency departments and one urgent care centre in the North West of England. Measurements were recorded over 1 week; data were collected for half a day at each of the three sites. Patients A convenience sample of 90 paediatric attendees (aged 0–16 years), 30 from each of the three sites. Main outcome measures Two independent measures for each child were compared using kappa or prevalence-adjusted bias-adjusted kappa (PABAK).
Bland-Altman plots were also constructed for continuous measurements. Results Inter-rater reliability ranged from moderate (0.62 (95% CI 0.48 to 0.74) weighted kappa) to very good (0.98 (95% CI 0.95 to 0.99) weighted kappa) for all measurements except ‘nurse judgement’, for which agreement was fair (0.30, 95% CI 0.09 to 0.50 PABAK). Complete information from both raters on all the clinical components of the PAGE score was available for 73 children (81%). These total scores showed good inter-rater reliability (0.64 (95% CI 0.53 to 0.74) weighted kappa). Conclusions Our findings suggest different nurses would demonstrate good inter-rater reliability when collecting the acute assessments needed for the PAGE score, reinforcing the applicability of the tool. The importance of determining reliability in scoring systems is highlighted and a suitable methodology is presented.
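The agreement statistics named in the abstract (Cohen's kappa and prevalence-adjusted bias-adjusted kappa, PABAK) can be sketched as follows. This is an illustrative implementation of the standard formulas only; the ratings below are invented example data, not the study's measurements.

```python
# Sketch of two-rater agreement statistics: Cohen's kappa and PABAK.
# Standard formulas: kappa = (Po - Pe) / (1 - Pe); PABAK = (k*Po - 1) / (k - 1)
# for k categories, where Po is observed and Pe chance-expected agreement.
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same subjects."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats)      # chance agreement
    return (po - pe) / (1 - pe)

def pabak(r1, r2, k):
    """Prevalence-adjusted bias-adjusted kappa for k categories."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    return (k * po - 1) / (k - 1)

# Hypothetical binary admit/discharge judgements from two raters.
rater1 = ["admit", "admit", "discharge", "discharge", "admit", "discharge"]
rater2 = ["admit", "discharge", "discharge", "discharge", "admit", "discharge"]
print(round(cohen_kappa(rater1, rater2), 3))  # 0.667
print(round(pabak(rater1, rater2, k=2), 3))   # 0.667
```

PABAK replaces the empirical chance-agreement term with the value expected under equal category prevalence, which is why the abstract reports it for the highly skewed ‘nurse judgement’ item, where ordinary kappa can be misleadingly low.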