The white papers, quality indicators and clinical responsibility
Abstract
The coalition government has set out its stall in a cluster of white paper consultation documents. One theme to emerge is a commitment to monitoring outcomes. It is outcomes, underpinned by National Institute for Health and Clinical Excellence quality standards, that are to be used to regulate the NHS, and these will be made available to the public. This paper sets out the importance of measuring quality in the NHS and some of the principles involved in the analysis, presentation and interpretation of results. Clinicians have a duty to improve patient care, and measurement and comparison are among the tools at their disposal. Clinical involvement in the development of metrics and quality indicators is essential for meaningful results, and it is vital that clinicians now take ownership of the quality of the clinical data captured on their patients.
Introduction
The coalition government has set out its stall in a cluster of white paper consultation documents.1–3 A number of themes emerge but one that will be of direct interest to frontline clinicians is the commitment to monitoring outcomes.1 It is outcomes that are to be used to regulate the NHS and these will be made available to the public on government websites,4 through NHS Choices5 and by a number of independent information specialists.
It is important for frontline clinicians to understand the reasons for this approach. The World Health Organization's (WHO's) World health report 2000 was devoted to improving performance.6 The introductory paragraph states ‘The difference between a well-performing health system and one that is failing can be measured in death, disability, impoverishment, humiliation and despair’. Strong words perhaps, but the clear implication is that any government of a civilised society requires information to reassure itself, its subjects and the international community of the quality and equity of its healthcare system. Internationally it is not just WHO that takes an interest in such matters; the Organisation for Economic Cooperation and Development (OECD) has a Health Care Quality Indicator project7 which aims to develop a set of indicators to compare health services across member countries (including the UK).
In order to achieve optimal outcomes, the government has recognised the importance of adherence to quality standards and the National Institute for Health and Clinical Excellence (NICE) has been given the task of working with professionals to develop these standards for priority areas. Assessment of service quality requires the development of metrics and quality indicators.
Metrics and quality indicators
A metric refers to repeated or sequential numerical measurements of an attribute of a patient or service. For example, the monthly measurement of methicillin-resistant Staphylococcus aureus (MRSA) incidence is an MRSA metric.
A quality indicator (QI) is the use of one or more measures or metrics to provide information about change in the context of an objective, target or goal. For example, the reduction in overall incidence of MRSA in an institution over time is an indicator of the level of achievement of infection control objectives. QIs do not provide a direct measure of service quality, but they do indicate which services are likely to benefit from further investigation. Metrics and QIs are first and foremost tools that healthcare professionals can use to improve the quality of the services they provide. It was Lord Kelvin (1824–1907) who said ‘if you cannot measure it, you cannot improve it’. Although this statement may not be true in its entirety, measurement is still a good starting point for those services which are amenable to this approach. The measurement of healthcare can be divided into the following three broad categories.8
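The distinction between a metric and a QI can be illustrated in a few lines of code. This is a minimal sketch using entirely hypothetical figures (the monthly counts, bed-day denominators and the 30% reduction target are all invented for illustration), not real MRSA data:

```python
# Metric: monthly MRSA cases per 1,000 occupied bed-days (hypothetical data).
monthly_cases = [14, 12, 11, 9, 8, 7]
monthly_bed_days = [9500, 9400, 9600, 9300, 9500, 9400]

mrsa_metric = [
    1000 * cases / bed_days
    for cases, bed_days in zip(monthly_cases, monthly_bed_days)
]

# Quality indicator: the same measurements judged against an objective
# (here, a hypothetical target of a 30% reduction over the period).
reduction = 1 - mrsa_metric[-1] / mrsa_metric[0]
target_met = reduction >= 0.30

print(f"Incidence fell by {reduction:.0%}; reduction target met: {target_met}")
```

The metric is simply the repeated measurement; it only becomes an indicator once it is set against the infection control objective.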
Adequacy of service provision
The structure of the service may be examined to determine whether published standards are met such as those in the national service frameworks. This could include the availability of facilities as well as having the appropriate number of trained staff and the required protocols and pathways in place.
Process of care
This relates to the provision of optimal clinical practice and may be used to monitor whether guidelines or care pathways are followed. Examples might include:
the time to brain scan in patients developing acute stroke as this is critical for effective thrombolysis treatment
the percentage of mothers in preterm labour receiving timely antenatal steroids as this has been proven to reduce the severity of lung disease in infants.
The value of these measurements is determined by the underlying evidence base and its importance to clinical outcomes. Process measures are the easiest to interpret and the most likely to result in rapid improvement. On the other hand, surfacing the information can be expensive especially if case note review is required.
Outcome of care
This relates to the benefits which patients experience as a result of care and is the domain which has received the greatest emphasis in the white paper documents. Although judged to be most important, indicators in this category are the most difficult to interpret when used to assess service quality, because the outcome of any treatment will depend on a multitude of factors. For example, mortality will depend upon the illness severity, age of the patient and co-morbidities present, to name a few. Also, mortality may be affected by decisions taken by patients (how soon they seek medical advice), general practitioners (when and to whom they refer), hospital clinicians (accuracy of diagnosis and effective and timely treatment) and community services (quality of aftercare, social and family support). For these reasons, outcome measures are often subject to statistical adjustment to make allowance for some of the confounding factors.
Importance of good quality data
Measuring the quality of care, whether it is by process or outcome, requires good quality data. The most comprehensive dataset available for England is the hospital episode statistics (HES).9 Collected on all hospital admissions since 1989, this national resource contains coded information about diagnoses and procedures on every inpatient. These data are collected through each trust's patient administration system (PAS). HES data, which are held by the NHS Information Centre (IC), are linked10 to the national mortality statistics available through the Office for National Statistics (ONS) and can also be linked by the IC to other databases.
One of the perceived problems with HES is the fact that clinicians are often divorced from the process of coding the data, leading to inaccuracy.11 The Royal College of Physicians (RCP) has done a great deal of work through the iLab project to attempt to improve clinical engagement.12 It concluded that HES was not suitable for monitoring the performance of individual consultant physicians because it was originally designed for administrative purposes, which do not relate well to current working practices. Furthermore, longstanding clinical disengagement from the validation and use of HES data was cited as one of the reasons for poor data quality. A previous paper13 called for a change in culture and process, along with much greater clinician engagement in data collection and validation. In recent years some clinicians have become more aware of the importance of coding because of Payment by Results (PbR),14 although this may lead to coding to maximise income rather than for clinical accuracy. However, it is essential that clinicians become much more involved in the future as there is an intent to use this information to judge the quality of care they provide.3 Indeed, HES data are already used to develop quality indicators which are published by NHS Choices.5 A discussion document outlining seven key issues that need to be improved to make HES more useable and clinically relevant has been published by the Academy of Medical Royal Colleges.15
The white paper documents3 suggest that QIs will be published at increasing levels of disaggregation from trust level, down to specialty and even consultant-led teams. As data become more disaggregated, casemix adjustment becomes more difficult and data quality more critical. Other data sources, such as national audits and specialist databases, will also be used, but linkage to HES often enhances the scope of the information available. At a regional level, HES data are used by the quality observatories16 to support quality monitoring and improvement.
Mortality ratios
Mortality is one of the easiest outcomes to measure as it is unequivocal and accurately recorded by the ONS. To compare death rates between organisations or geographical locations, the standardised mortality ratio (SMR) has been developed.17 In this method, data on all deaths following a procedure are used to determine the relative contribution of independent variables such as age, sex, co-morbidities and illness severity, and each patient is allocated a risk of death. If the organisation's observed death rate matches that predicted by the risk profile of its patients, the SMR will be 100%. A low SMR indicates that the service is doing better than expected and a high value the converse. In the hospital standardised mortality ratio (HSMR) this technique is applied to a basket of 56 common diagnostic groups, namely those associated with 80% of hospital deaths.18 HES data are used as the source for the risk-adjustment calculations. Recently a new mortality indicator, the summary hospital-level mortality indicator (SHMI), has been developed for the NHS.19 This is not in itself an indicator of quality; its value lies in the opportunity to flag up hospitals with excessively high mortality so that hospital management boards can investigate and determine whether there is a problem that needs to be addressed. Ranking hospitals by SHMI will not be useful, but the values are likely to be mandated as part of a trust's ‘quality accounts’.
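The arithmetic behind an SMR can be made concrete with a toy calculation. This is an illustrative sketch with invented patients and risk estimates (real risk-adjustment models such as those behind the HSMR are fitted to HES data; the figures below are hypothetical):

```python
# Each patient: (died, predicted risk of death from a casemix model).
# All values are hypothetical, for illustration only.
patients = [
    (True,  0.5),
    (False, 0.4),
    (True,  0.3),
    (False, 0.3),
    (False, 0.3),
    (False, 0.2),
]

# Expected deaths are the sum of the individual risks;
# SMR = 100 * observed / expected.
observed = sum(died for died, _ in patients)
expected = sum(risk for _, risk in patients)
smr = 100 * observed / expected

# An SMR of 100% means deaths match the risk profile of the patients;
# below 100% is better than expected, above 100% the converse.
print(f"Observed {observed}, expected {expected:.1f}, SMR {smr:.0f}%")
```

Here two deaths were observed against two expected, giving an SMR of 100%, i.e. a death rate exactly matching the casemix.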
Patient-reported outcome measures
Another approach to obtaining outcome data is to determine how patients themselves evaluate the results of treatment, using patient-reported outcome measures (PROMs).20 This is not easy because the questions used have to be validated for each condition. Standardised quality of life questionnaires may be used to make comparisons across conditions, for example to determine whether patients appear to gain more benefit from hernia or from knee surgery. However, if a small hernia does not impact on quality of life scores, then the repair will not show improvement. This does not mean that hernia repair is not worthwhile: the same repair might rate as highly beneficial on a questionnaire validated for evaluating this procedure. For this reason, both types of assessment are necessary. PROMs probably gain the greatest traction when used to measure the outcome of a single procedure, such as a hip replacement. Even in relatively well-controlled environments, external factors such as casemix and patient expectations will affect the results. Also, long-term outcomes, such as the longevity of a replacement hip, will not be evaluated by this means. Currently the Department of Health is piloting the use of PROMs in four surgical areas (hip and knee replacement, hernia repair and varicose veins surgery).21 The results are linked to HES and available for different providers on HES Online.22 The new outcomes framework suggests that many more PROMs will be developed.1
Attributes of a good quality indicator
A good QI should aim to measure something that is:
unequivocal
practical to measure
important to clinical practice
underpinned by good evidence
amenable to change.
In this respect it is better to have a few good indicators than a plethora which do not meet these criteria.23 One approach that is being adopted by many specialty groups is to develop QIs and metrics around a pathway of care such as the stroke pathway.24 In this way, QIs can be used to ensure that the important clinical decisions in the pathway are achieved for the majority of patients.
Presentation of quality indicators
QIs may be collected for a variety of purposes.23 Service improvement is one of the most important, and these QIs must be fed back to those delivering the service. QIs may also be collected for commissioners to ensure that the service meets their specification, for patients and the public to gain reassurance about the safety and effectiveness of care, and by the government to demonstrate good governance to the taxpayer and to the international community. These requirements are different and affect not only which indicators are collected, but also the way the data are presented. It is inevitable that the majority of QI data will be open to public scrutiny because of the white paper imperative around transparency and public accountability.2 Thus it is important that the data presented are intuitive and do not readily lead to false conclusions. For example, presenting data as rankings is often misleading because it is natural to assume that a service ranked 1 is better than a service ranked 21, when in fact the difference between the two may simply be a matter of chance. One alternative approach is to compare performance using a funnel plot.25 In this approach the indicator value (the proportion of patients with an adverse outcome) is shown on the vertical axis and the denominator (the population studied) on the horizontal axis. Confidence intervals (CIs) are drawn on the graph, normally at the 95% and 99.8% levels, to identify two grades of outlier. The plot resembles a funnel because the CIs widen as the number of eligible patients falls, which is one reason why small hospitals are more likely to appear at the top and the bottom of any ranking system. The advantage of this approach is that it easily identifies statistical outliers, either positive or negative. It is important to appreciate that a statistical outlier is not necessarily a clinical outlier, because it is impossible to account for all possible variability in the data.
QIs should be used to trigger an internal investigation when outliers are detected, but the conclusion might be that the service quality is not to blame for an apparent adverse outcome.
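The funnel-plot logic described above can be sketched numerically. This is a simplified illustration with hypothetical hospitals, using the normal approximation to the binomial for the control limits (published funnel plots often use exact or overdispersion-adjusted limits):

```python
from math import sqrt
from statistics import NormalDist

# name: (adverse outcomes, eligible patients) -- hypothetical data
hospitals = {
    "A": (9, 120),
    "B": (30, 150),
    "C": (55, 800),
}

# Benchmark proportion pooled across all hospitals.
total_events = sum(e for e, _ in hospitals.values())
total_n = sum(n for _, n in hospitals.values())
p0 = total_events / total_n

def limits(n, level):
    """Two-sided control limits around p0 for a denominator of n.

    The limits narrow as n grows, which is what gives the plot
    its funnel shape.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. about 3.09 for 99.8%
    half_width = z * sqrt(p0 * (1 - p0) / n)
    return p0 - half_width, p0 + half_width

for name, (events, n) in hospitals.items():
    rate = events / n
    lo, hi = limits(n, 0.998)
    status = "outside 99.8% limits, investigate" if not lo <= rate <= hi else "within limits"
    print(f"{name}: {rate:.1%} ({status})")
```

With these invented figures, hospital B falls outside the 99.8% limits and would trigger an internal investigation, while A, despite a similar denominator, does not; as the text stresses, being a statistical outlier is a prompt for scrutiny, not a verdict on quality.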
Conclusion
Analysis of the white paper documents makes it abundantly clear that the direction of travel for the NHS is quality improvement through measurement and reporting. But why should clinicians choose to get involved? Firstly, they have a duty to strive to improve quality of care and patient outcomes. QIs and metrics are important tools along with research and development, audit, clinical guidelines and care pathways to achieve this aim. Secondly, QIs can be used to promote equity across the service and across clinical networks. Thirdly, patients have a right to expect that evidence-based practice will be implemented in a timely and comprehensive way across the NHS. Finally, QIs will only improve patient care if clinicians assist in their development, own the indicators and act on the results.26
Clinicians also need to take ownership of the HES data collected on their patients to assure its accuracy. This will normally involve meeting regularly with clinical coders to review the data submitted by the trust. It also involves training juniors to appreciate the importance of the data. Only in this way will data quality improve. Although the connection might not be obvious, high quality data will improve patient care through the mechanisms described above.
Competing interests
The author is a national clinical lead for hospital specialties at the NHS Information Centre.
Acknowledgements
The author would like to thank Brian Derry for his expert review and comments, which have significantly improved the clarity of the text.
- © 2012 Royal College of Physicians
References
- Department of Health. Liberating the NHS: transparency in outcomes – a framework for the NHS. London: DH, 2010. www.dh.gov.uk/en/Consultations/Liveconsultations/DH_117583
- Department of Health. Equity and excellence: liberating the NHS. London: DH, 2010. www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_117794.pdf
- Department of Health. Liberating the NHS: an information revolution. London: DH, 2010. www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_120598.pdf
- NHS Choices.
- World Health Organization. The world health report 2000. Health systems: improving performance. Geneva: WHO, 2000. www.who.int/whr/2000/en/
- Arah OA, Westert GP, Hurst J, Klazinga NS.
- HESonline.
- Audit Commission.
- Croft GP.
- Williams JG, Mann RY.
- Audit Commission.
- Spencer SA.
- West Midlands Quality Institute.
- London Health Observatory.
- National Quality Board.
- Black N, Jenkinson C.
- Department of Health. Guidance on the routine collection of patient reported outcome measures (PROMs). www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_092625.pdf
- Information Centre for Health and Social Care. HES online: patient reported outcome measures (PROMs) monthly report, 2010. www.hesonline.nhs.uk/Ease/servlet/ContentServer?siteID=1937&categoryID=1295
- Raleigh VS, Foot C.
- Acute stroke and TIA algorithm 2: stroke pathway. www.nice.org.uk/nicemedia/live/11646/38892/38892.pdf