Aim: To evaluate the impact of reorganisation of the health service and a change in the definition used to collect immunisation coverage statistics on vaccine coverage data in England.
Methods: Denominator data from the Cover Of Vaccination Evaluated Rapidly (COVER) programme, the national programme for the collection of immunisation coverage statistics, were compared to the Office for National Statistics (ONS) population data; the impact of any discrepancies between the two data sources on vaccine coverage was assessed.
Results: ONS populations were generally larger than COVER populations. This was particularly true for 2002, the year Primary Care Trusts (PCTs) came into existence, suggesting that some children are being missed by the COVER programme. On average, in 1998–2001 around 10 000 children per year (∼2%) were lost to the COVER population estimates compared to data from ONS. This increased to around 20 000–40 000 (∼3–8%) children in 2002, but decreased again in 2003 to 2000–8000 (∼1%) children. Assuming all the “lost” COVER children were vaccinated, vaccine coverage appeared very similar to that seen in the COVER programme for all antigens. However, assuming all the “lost” children were unvaccinated, coverage would be substantially lower for all antigens (range 2.7–3.5%).
Discussion: This analysis provides a quantitative example of how changes such as restructuring of the health service directly impact on public health surveillance. Such changes have potential risks for information and may affect important data used to inform public health policy.
- vaccine coverage
- childhood immunisation
- vaccine uptake rate
Cover Of Vaccination Evaluated Rapidly (COVER) is the national programme for the collection of immunisation coverage statistics and plays a crucial role in the surveillance of vaccine preventable disease and in informing public health policy. Reorganisation of the health service has important implications for such public health surveillance. Primary Care Trusts (PCTs) were introduced in 2002, replacing the old health authorities.1 Simultaneously, the definition used to collect COVER data changed.2 In the past, denominators were based on resident health authority or trust populations. In 2002 this changed to responsible populations, based on a combination of general practitioner (GP) registration and residence for unregistered patients. We conducted an analysis to see how these changes might have affected vaccine coverage data in England.
COVER immunisation statistics are derived from data held in computerised child health systems (CHS).3 These data are collected quarterly and annually; however, annual figures were used for the purpose of this analysis. We used denominator data from the COVER programme for 1, 2, and 5 year olds for each year between 1998 and 2002. Data for 5 year olds in 1998 were not available. These data were compared with corresponding official mid-year population estimates from the Office for National Statistics (ONS). Ratios were calculated by dividing COVER populations by ONS populations. Confidence intervals for these ratios, and for differences between ratios in different years, were based on the distribution of the COVER:ONS ratios across the 28 Strategic Health Authorities (SHAs) in England. Analysis of variance and paired t tests were used to compare ratios between years. The actual discrepancy between the two data sources, in terms of the number of children, was also calculated for each year.
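The ratio calculation described above can be sketched as follows. This is a minimal illustration with invented SHA-level counts (the study's actual denominators are not reproduced here); it computes the COVER:ONS ratio for each SHA and an approximate 95% confidence interval across SHAs, in the spirit of the method described.

```python
import math
import statistics

# Hypothetical COVER and ONS denominators for five SHAs
# (illustrative numbers only, not the study's data).
cover = [21800, 18950, 24100, 19400, 22600]
ons   = [22300, 19500, 24300, 20100, 23500]

# COVER:ONS ratio for each SHA
ratios = [c / o for c, o in zip(cover, ons)]

mean_ratio = statistics.mean(ratios)
sd = statistics.stdev(ratios)
n = len(ratios)

# Approximate 95% CI for the mean ratio across SHAs
# (t critical value for n - 1 = 4 degrees of freedom).
t_crit = 2.776
half_width = t_crit * sd / math.sqrt(n)
ci = (mean_ratio - half_width, mean_ratio + half_width)

print(f"mean COVER:ONS ratio = {mean_ratio:.3f}")
print(f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A mean ratio below 1 with a confidence interval excluding 1 would indicate that COVER denominators are systematically smaller than ONS estimates, as the study found.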
As COVER populations tended to be smaller than ONS populations, we also assessed the maximum and minimum impact these “lost” children could potentially have on vaccine coverage, depending on their vaccination status. Firstly, coverage was calculated using the ONS population as the denominator and the number of children who had completed immunisation, according to COVER statistics, as the numerator. This calculation of coverage presumed all the “lost” COVER children were unvaccinated. Secondly, coverage was calculated again using the ONS population as the denominator; however, the numerator included not only the number of children who had completed immunisation, according to COVER statistics, but also the number of children “lost” from COVER when compared to ONS population data. This calculation of coverage presumed all the “lost” COVER children were vaccinated. These coverage estimates were then compared to actual coverage data from the COVER programme for each year, age group, and antigen. The antigens included diphtheria, tetanus, pertussis, polio, Haemophilus influenzae type b (Hib), meningococcal group C (MenC), and measles, mumps, and rubella (MMR).
All previous analyses were done using ONS population data from the 1991 census. When data based on the 2001 census became available, the actual discrepancies in the number of children for 2001 and 2002 were recalculated using these more up-to-date and accurate data. This also included data from 2003, as these were not available from the 1991 census.
Using 1991 census
The ratios of COVER to ONS populations for 1 year olds differed significantly (p = 0.01) between 1998 and 2002 (fig 1). When the years 1998–2001 alone were compared, no significant difference (p = 0.07) was evident between them; however, they did differ significantly from 2002 (p < 0.001), suggesting the overall difference seen between 1998 and 2002 was attributable to a change in 2002. A significant (p < 0.001) difference was also observed between the ratios for 2 year olds in 1998–2002; this appeared to be due to a difference in 1998. Regional boundary changes were implemented in 1998, and these may have led to additional problems with data quality in that year. Nevertheless, the point estimate for this age group in 2002 was the lowest of all years, including 1998, and, unlike in 1998, the ONS population exceeded the COVER population. The trend seen for 5 year olds was similar to that for 1 year olds. The ratios of COVER to ONS populations differed, with borderline significance (p = 0.059), between 1999 and 2002. When the years 1999–2001 were compared, no significant difference (p = 0.149) was evident between them; however, they did differ significantly (p = 0.023) from 2002, again suggesting a change in that year.
On average, in 1998–2001 around 10 000 children per year (∼2%) were lost to the COVER population estimates compared to data from ONS. This increased to around 20 000–40 000 (∼3–8%) children in 2002.
Assuming all the “lost” COVER children were vaccinated, vaccine coverage appeared very similar to that seen in the COVER programme for all antigens. However, assuming all the “lost” children were unvaccinated, coverage would be substantially lower for all antigens (range 2.7–3.5%; fig 2).
Using 2001 census
The 2001 census population data gave slightly higher estimates for “lost” children (between 681 and 775 additional children were “lost” to COVER in 2001; between 739 and 1747 additional children were “lost” to COVER in 2002). As 2003 ONS population estimates also became available with this census, the number of children “lost” to the COVER population estimates this year when compared to data from ONS could be calculated; this gave a difference of 2000–8000 (∼1%) children between the two data sources (table 1).
What is already known on this topic
It is suspected that NHS reorganisation affects the quality of information, including immunisation coverage statistics, because systems have to be reconfigured, although very little exists in the literature on this
If it is assumed that ONS population data are the gold standard, then it appears that recent changes in the organisation of the health service and in COVER population definitions have had a definite impact on the quality of data in the COVER programme. It was previously thought that COVER populations were inflated in some areas of high population mobility because some children were not being removed from the CHS when they moved away.4 Their vaccinations would no longer be recorded on that system because they would take place out of the area, so the children would appear unvaccinated. In addition, COVER numerators could be further underestimated because records of vaccination were not shared between some CHSs, particularly those serving highly mobile populations. These factors would lead to an underestimation of vaccine coverage.4 This analysis, however, shows that COVER populations in the study period tended to be smaller than ONS populations, with the difference being significantly greater in 2002. Following the change in denominator definition from resident to PCT responsible populations in 2002, anecdotal evidence suggests there was a delay in assigning unregistered children to the PCT of residence on the appropriate CHS. This could explain the denominator change observed, and these unregistered children may be less likely to access full primary care and thus at greater risk of being unvaccinated. If these missing children are indeed more likely to be unvaccinated, the implication is that vaccine coverage estimates will be overestimated rather than underestimated, particularly in 2002.
Encouragingly, 2003 COVER data suggest that the child health computer systems have begun to incorporate the reorganisational and population definition changes that occurred in 2002. This confirms the observation that it takes CHSs approximately two years to adjust to any reorganisational change.2 Precise identification of outstanding problems and investigation of variations between and within regions requires more detailed analyses, particularly at the PCT level.
To our knowledge, this analysis is the first to provide a quantitative example of how restructuring of the health service directly impacts on public health surveillance. One of the most important functions of the surveillance of vaccine preventable disease, of which monitoring vaccine coverage is a key component, is to gather information to design and implement the most effective immunisation programme for disease control and prevention. Reorganisation changes have potential risks for information and may affect important data used to inform public health policy. Other current ongoing national changes, such as the development of the National Programme for Information Technology (NPfIT) Connecting for Health, and future mergers of PCTs, may have similar implications and pose potential risks to this crucial information.
What this study adds
This study shows the quantitative impact of NHS reorganisation on important health surveillance data used to inform public health policy. Although the changes appear transient, there are implications that the data need to be interpreted with caution during the transition
Future NHS changes will also have implications for such data; we need to bear this in mind and try to be one step ahead
The authors thank Dr Richard Pebody for providing population data from ONS and all the participants of the COVER programme. We also thank Ana Da Costa for all her technical assistance.
Published Online First 16 February 2006
Competing interests: none