Abstract
Objectives To evaluate the reliability and validity of a feedback tool for children's carers, to explore the feasibility of delivering it nationally and to determine its acceptability to doctors.
Participants 122 UK paediatricians on the specialist register undertaking outpatient consultations.
Design Participants were each sent 50 forms for distribution to carers. Mean scores for each question, and for the overall pilot cohort, were returned to participants with verbatim free-text comments. Participating paediatricians’ views were sought before and after receiving feedback.
Results 122 doctors returned 4415 forms (mean 36 per doctor). All doctors scored highly: the median score across all returned forms was 4.58 (interquartile range 0.17) out of a maximum of 5. Differences were observed between scores from female and male carers (p<0.05), from consultations rated jointly by carer and child compared with those rated by the carer alone (p<0.05), and from carers who had previously met the doctor compared with those in their first consultation (p<0.001). ‘White’ doctors received higher ratings than ‘non-white’ doctors (p<0.05), and white patients rated both white and non-white doctors more highly than non-white patients did (p<0.01). A minimum of 25 consultations rated by children's carers is needed for acceptable reliability. 93.9% of participants would be happy to be assessed in this way for the purposes of revalidation.
Conclusions National delivery of a valid and reliable method of carer feedback is feasible. Scores and acceptability among these self-selected doctors were high. Confounding variables may influence feedback, so guidance on interpretation may be needed.
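The abstract does not state how the 25-consultation minimum was derived. As an illustrative sketch only, such thresholds are commonly obtained from the Spearman–Brown prophecy formula, which relates the reliability $R$ of the mean of $n$ independent ratings to the reliability $r$ of a single rating; solving for the number of ratings needed to reach a target reliability gives

$$n = \frac{R\,(1 - r)}{r\,(1 - R)}.$$

For example, a single-rating reliability of $r \approx 0.085$ with a conventional target of $R = 0.70$ yields $n \approx 25$. Both values here are assumptions chosen to illustrate the calculation, not figures reported by the study.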
Footnotes
- Funding This study was supported by a grant from the Academy of Medical Royal Colleges.
- Competing interests None.
- Provenance and peer review Not commissioned; externally peer reviewed.