Article Text
Abstract
Aims The Paediatric Observation Priority Score (POPS) is a validated paediatric acuity assessment tool for use in emergency and acute care settings. We wished to assess the reliability of POPS by analysing inter-observer variation among nursing staff.
Methods Twelve participants were recruited from a single emergency department nursing team. They were shown video footage of a paediatric advanced nurse practitioner (PANP) assessing three children with different POPS scores. They were blinded to the score generated by the PANP and asked to formulate their own POPS based on the recorded assessment.
Results Fleiss' kappa was used to analyse agreement on each individual observational parameter and to generate an overall kappa value for each case. Kappa values of 0.735 (good) and 0.660 (good) were seen in the patients presenting with abnormal physiological observations, and complete agreement (kappa = 1) was demonstrated in the child with normal physiological parameters.
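For readers unfamiliar with the statistic, Fleiss' kappa measures agreement among a fixed number of raters assigning subjects to categories, corrected for chance. A minimal sketch of the computation is below; the rating counts are hypothetical and illustrative only, not the study's data (note that a table in which every rater agrees on every case yields kappa = 1, matching the complete agreement reported for the child with normal observations).

```python
# Fleiss' kappa from a table of rating counts:
# rows = subjects (cases), columns = categories,
# entry [i][j] = number of raters assigning subject i to category j.
# Every subject must be rated by the same number of raters.

def fleiss_kappa(counts):
    N = len(counts)              # number of subjects
    n = sum(counts[0])           # raters per subject (constant across rows)
    k = len(counts[0])           # number of categories
    total = N * n                # total number of assignments

    # Proportion of all assignments falling in each category.
    p = [sum(row[j] for row in counts) / total for j in range(k)]

    # Observed agreement for each subject.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P) / N                 # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 12 raters sort 3 cases into three acuity bands.
ratings = [
    [10, 2, 0],   # case 1: most raters choose band 1
    [1, 9, 2],    # case 2: most choose band 2
    [0, 1, 11],   # case 3: most choose band 3
]
print(round(fleiss_kappa(ratings), 3))
```

Common rules of thumb grade kappa above roughly 0.6 as "good" agreement, which is the banding applied to the 0.735 and 0.660 values reported above.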
Conclusion This study provides evidence that inter-observer agreement among nurses using POPS to assess sick children is 'good'. Variation between users of scoring systems has previously been under-investigated, and this study will allow us to refine POPS further to improve its clinical utility.