Injuries are the leading cause of death among children and young people in the industrialised world and are a major contributor to disability.1 2 In the early 1990s, both the UK Department of Health3 and the Scottish Office4 identified accidents as a priority area for action in their respective policy statements. However, efforts to formulate and implement local, national, and international preventive policies have been hindered, at least in part, by the paucity of reliable data on injury frequency, cause, and outcome.
Many countries compile routine data on injuries derived from mortality statistics, occupational records, or through incident reporting—for example, to police and fire departments.2 5-7 These data are of variable relevance and quality, however, and are often inaccessible. The establishment of specially designed injury surveillance systems is widely advocated as a prerequisite for the development and evaluation of injury prevention strategies, particularly at a local level.4 7 8
This paper reviews the published literature on injury surveillance based at accident and emergency departments and attempts to identify the characteristics of a successful injury surveillance system (ISS).
What is injury surveillance?
Surveillance has been defined as the “continuous analysis, interpretation and feedback of systematically collected data”.9 It implies a proactive mechanism for identifying problems and implementing appropriate preventive strategies on a routine basis. Injury surveillance may be regarded as a specific form of public health audit. It can be designed to generate information on both the numbers and characteristics of injuries, such as the injury location, circumstances, cause, and mechanism. This information is crucial for detecting trends in injury incidence, identifying risk factors, developing injury control measures, and assessing their impact. This process is thus an epidemiological means to a public health end, namely prevention. As it is likely that most moderate and serious injuries present to hospital accident and emergency departments, many ISSs have been implemented in this setting.
Who needs injury surveillance?
Information about injuries is required at both national and local levels.10 Nationally, injury data are used by government departments for policy making and priority setting; by researchers investigating epidemiology, treatment, and prevention; and by a range of other voluntary and commercial organisations interested in injury. Locally, injury data are required for planning health services, developing and implementing safety policies and standards, and for evaluating the effectiveness of interventions.
National injury surveillance systems
Many industrialised countries now have national ISSs based within accident and emergency departments designed specifically to monitor injury events. Examples have been reported from the USA,2 Australia,11 Canada,12 and elsewhere in Europe (personal communication, Consumer Safety Institute, Amsterdam) (table 1).13-15 Other countries have identified the need for such a system. A national ISS is being developed in Sweden16 and a national minimum data set for emergency departments has been advocated in New Zealand.17
Local injury surveillance systems
National systems are, however, often insufficient for effective injury prevention. Differences in local conditions are likely to contribute to differences in the distribution of injury, making local analyses important.7 As a result, many community based groups and professionals have attempted to establish local ISSs to fill this gap.
Pioneering work in this field has been carried out in North America and Australia.12 18-21 The UK has been slow to develop local accident and emergency based surveillance schemes, with a few notable exceptions. In 1993 the Canadian Hospitals Injury Reporting and Prevention Programme (CHIRPP) system was imported into the accident and emergency department at the Royal Hospital for Sick Children, Yorkhill in Glasgow, the largest children’s hospital in Scotland.22 In Wales, the All Wales Injury Surveillance System (AWISS) was established in 1995 (table 2).23
In some centres, local data on injury are compiled by merging conventional accident and emergency department records with information on injuries.24 25 In one study, E codes were prospectively assigned to accident and emergency patient records on a trial basis to assess their usefulness. The accuracy of these E codes was estimated to be 98% when checked manually against case notes.25 Although such initiatives may provide valuable local data on broad causal categories, they do not provide detailed information on the injury circumstances, location, and mechanism.
Although most industrialised countries now operate some form of injury surveillance, there is little methodological consistency in their approaches. Sampling techniques, collection methods, collection location, data classification, and data coding vary between and within countries. Studies in the USA have identified wide variations in data collection practices in accident and emergency departments.26 27 The scope of the data and population coverage also vary widely. Some systems collect information on the whole population, others on specific age groups, normally children. Some collect information solely on injuries in the home, whereas others cover all injury types. Several outwith the scope of this paper collect information on the more serious end of the injury spectrum. For instance, data collected as part of the Childhood Injury Prevention and Promotion of Safety (CHIPPS) programme in Newcastle, UK, are based exclusively on admissions to hospital.28 Until standardised variable definitions, classifications, and sampling techniques are developed and adopted nationally and globally, few opportunities for meaningful national and international comparisons will exist.29
Characteristics of a successful injury surveillance system
Research to date suggests that, to be successful, an ISS should be practical, valid, stable, relevant, accessible, and effective.7 22 30
The installation, operation, and maintenance of an ISS usually require an investment of additional resources, technical support, and staff.22 30 To a great extent the smooth running of the system depends on the enthusiasm and commitment of the staff involved. Prior consultation with staff is a crucial part of introducing such a system and data collection must become integrated into the daily work routine. The ISS duties will then be seen as a core responsibility rather than a secondary, optional function that has a low priority. From the patients' point of view, evidence suggests that they are willing to cooperate in giving accounts of injury events if the questionnaire is concise and easy to complete.22 30
A valid ISS is one which generates information of a scientifically acceptable quality. Quality comprises several dimensions including representativeness, sensitivity, specificity, and accuracy. For epidemiological purposes the data collected should be reasonably representative of the reference population.7 Sensitivity is the capacity of the system to identify all cases of injury within the population and specificity is its capacity to exclude non-injuries. In practice, poor sensitivity is the most frequent defect. If only a minority of injured people are subject to surveillance (for example, only those admitted to hospital), the observed pattern of injury may bear little relation to the true pattern of incident injury in the population. While implementing an ISS in accident and emergency departments will identify greater numbers of injuries than an ISS using hospital admission data only, the large number of injuries presenting to primary care facilities remain excluded. The incompleteness of data collection at certain times of the day, or inaccurate coding of injury types, may also be problematic.30 A frequent source of inaccuracy arises because the ISS does not record severity, which is an important determinant of epidemiological characteristics.31 32 Incomplete or inaccurate data on injury severity may compromise the epidemiological potential of an ISS.
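The sensitivity and specificity described above can be expressed as simple proportions. The following sketch uses entirely hypothetical counts (not drawn from any study cited here) purely to illustrate how a minority capture rate distorts the observed injury pattern:

```python
# Illustrative calculation of surveillance sensitivity and specificity.
# All counts below are hypothetical, for demonstration only.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of genuine injury cases the system captures."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of non-injury presentations the system excludes."""
    return true_neg / (true_neg + false_pos)

# Suppose 800 of 1000 injuries presenting to a department are captured,
# and 950 of 1000 non-injury attendances are correctly excluded.
print(f"sensitivity = {sensitivity(800, 200):.2f}")  # 0.80
print(f"specificity = {specificity(950, 50):.2f}")   # 0.95
```

A system restricted to hospital admissions would, on these hypothetical figures, have far lower sensitivity, since most injuries presenting to accident and emergency departments are not admitted.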
Relevance (sometimes called face validity) is often assumed rather than carefully considered. Data collected should be useful and relevant to policy makers and service providers. Many clinically based ISSs are of limited primary preventive value because they collect information on the injury outcome rather than on the preinjury phase (the circumstances surrounding the event). Conversely, clinicians may be disappointed by the non-clinical nature of the data included in an ISS which has been set up for public health as opposed to patient care purposes. Clarification of the aim of the ISS at an early stage should minimise the risk of this type of misunderstanding. The presentation and dissemination of the data are also critical to the perceived relevance of an ISS. Various methods of dissemination should be used to provide information to the wide range of parties involved in injury surveillance and control in ways that are appropriate to the target audience. Continuous feedback of the data is also valuable to promote accident and emergency staff morale and to maintain high quality data collection.
A key function of an ISS is the analysis of secular trends. This is only possible if definitions, denominators, sampling techniques, classification systems, and coding methods remain constant over time. Only some of these are under the control of those operating an ISS. For example, the current transition from one edition of the International Classification of Diseases (ICD 9) to another (ICD 10) will inevitably produce spurious time trends due to the application of different diagnostic or causal codes to the same clinical entities.
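The way a coding transition can manufacture a spurious trend may be sketched in a few lines. W10 and E880 are genuine ICD rubrics for falls on stairs, but the counts and the one-to-two split below are purely hypothetical:

```python
# Sketch of a spurious time trend created by an ICD coding transition.
# Counts and the code mapping are hypothetical; real ICD-9 to ICD-10
# mappings are many-to-many and far more complex.

icd9_counts_year1 = {"E880": 120}  # falls on/from stairs, ICD-9 E code

# After the transition, the same clinical entities are recorded under
# new ICD-10 codes and, in this hypothetical example, split in two.
icd10_counts_year2 = {"W10": 70, "W18": 45}

# Tracking only the old code suggests injuries vanished...
print(icd9_counts_year1["E880"], "->", icd10_counts_year2.get("E880", 0))

# ...whereas the total for the mapped group is essentially unchanged.
print(sum(icd10_counts_year2.values()))
```

Unless the analysis maps old and new codes into stable groups, the apparent fall reflects the change in coding, not a change in injury incidence.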
It is vital that an ISS is accessible. If potential users are unable to obtain information in a relevant and comprehensible format, the ISS will not fulfil its function. Some ISSs operate by pooling data centrally and generating aggregate tables, which may preclude local analyses, whereas others produce highly detailed data at a local level, making aggregation complex. Differences in local conditions are likely to contribute to differences in the geographical distribution of injuries, making local analyses necessary.7 Those developing an ISS should first identify the needs of its potential users and then seek to optimise its accessibility.
Despite anecdotal and indirect evidence,8 33 there are remarkably few published scientific data on which to judge the impact of injury surveillance on the frequency or pattern of injury in a population. This may, in part, be due to the relatively short periods of time over which ISSs have been operating, or to the methodological problems involved in designing such studies. The most likely explanation is that insufficient thought has been given to evaluating these systems at the planning stage. The choice of an appropriate method of evaluation depends on the objective of the ISS. If the objective is to inform the development of a local injury prevention programme, a process based evaluation should be designed. If, however, the objective is to reduce the incidence of mortality or morbidity due to injuries, an outcome based evaluation is necessary.
Evidence suggests that injury surveillance in accident and emergency departments is a worthwhile and achievable objective when coupled with professional commitment and appropriate operational conditions, particularly at a local level. This paper highlights several issues to be considered when embarking on the design and implementation of an ISS. By designing a system which is practical, valid, relevant, stable, accessible, and effective, the prospects for the implementation and evaluation of evidence based preventive programmes will be greatly enhanced. There has been little research to date, however, on the impact of ISSs on injury frequency or injury patterns.
National and international comparisons continue to be fraught with methodological difficulties. With a few exceptions, the scope and coverage of national systems lack the consistency required to allow valid comparisons to be made. By identifying and remedying the methodological variance in current ISSs, public health agencies could improve the quality of epidemiological data on injuries, thereby enhancing the prospects for more effective injury control. The development of an agreed standardised surveillance methodology would also greatly improve the validity and reliability of data generated by new national and local ISSs that may be introduced in the future.
It should be recognised that even high quality data collected by ISSs at accident and emergency departments will seldom be truly comprehensive or representative. Public health agencies should therefore seek to combine several sources of data to generate a profile of the pattern of injury in a population. Ideally, these would include data on mortality, admissions to hospital, accident and emergency presentations, injuries presenting to primary health care facilities, and injury related disability.