
A Multifaceted Intervention to Implement Guidelines and Improve Admission Paediatric Care in Kenyan District Hospitals: A Cluster Randomised Trial

Abstract

Background

In developing countries referral of severely ill children from primary care to district hospitals is common, but hospital care is often of poor quality. However, strategies to change multiple paediatric care practices in rural hospitals have rarely been evaluated.

Methods and Findings

This cluster randomized trial was conducted in eight rural Kenyan district hospitals, four of which were randomly assigned to a full intervention aimed at improving quality of clinical care (evidence-based guidelines, training, job aides, local facilitation, supervision, and face-to-face feedback; n = 4) and the remaining four to a control intervention (guidelines, didactic training, job aides, and written feedback; n = 4). Prespecified structure, process, and outcome indicators were measured at baseline and during three and five further 6-monthly surveys in control and intervention hospitals, respectively. Primary outcomes were process of care measures, assessed at 18 months postbaseline.

In both groups performance improved from baseline. Completion of admission assessment tasks was higher in intervention sites at 18 months (mean = 0.94 versus 0.65, adjusted difference 0.29 [95% confidence interval 0.05–0.54]). Uptake of guideline-recommended therapeutic practices was also higher within intervention hospitals: adoption of once-daily gentamicin (89.2% versus 74.4%; 17.1% [8.04%–26.1%]); loading dose quinine (91.9% versus 66.7%, 26.3% [−3.66% to 56.3%]); and adequate prescriptions of intravenous fluids for severe dehydration (67.2% versus 40.6%; 29.9% [10.9%–48.9%]). The proportion of children receiving inappropriate drug doses was also lower in intervention hospitals, both for excessive quinine (dose >40 mg/kg/day; 1.0% versus 7.5%; −6.5% [−12.9% to 0.20%]) and for inadequate gentamicin (2.2% versus 9.0%; −6.8% [−11.9% to −1.6%]).

Conclusions

Specific efforts are needed to improve hospital care in developing countries. A full, multifaceted intervention was associated with greater changes in practice spanning multiple, high mortality conditions in rural Kenyan hospitals than a partial intervention, providing one model for bridging the evidence to practice gap and improving admission care in similar settings.

Trial registration

Current Controlled Trials ISRCTN42996612

Please see later in the article for the Editors' Summary

Editors' Summary

Background

In 2008, nearly 10 million children died in early childhood. Nearly all these deaths were in low- and middle-income countries—half were in Africa. In Kenya, for example, 74 out of every 1,000 children born died before they reached their fifth birthday. About half of all childhood (pediatric) deaths in developing countries are caused by pneumonia, diarrhea, and malaria. Deaths from these common diseases could be prevented if all sick children had access to quality health care in the community (“primary” health care provided by health centers, pharmacists, family doctors, and traditional healers) and in district hospitals (“secondary” health care). Unfortunately, primary health care facilities in developing countries often lack essential diagnostic capabilities and drugs, and pediatric hospital care is frequently inadequate, with many deaths occurring soon after admission. Consequently, in 1996, as part of global efforts to reduce childhood illnesses and deaths, the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF) introduced the Integrated Management of Childhood Illnesses (IMCI) strategy. This approach to child health focuses on the well-being of the whole child and aims to improve the case management skills of health care staff at all levels, health systems, and family and community health practices.

Why Was This Study Done?

The implementation of IMCI has been evaluated at the primary health care level, but its implementation in district hospitals has not been evaluated. So, for example, interventions designed to encourage the routine use of WHO disease-specific guidelines in rural pediatric hospitals have not been tested. In this cluster randomized trial, the researchers develop and test a multifaceted intervention designed to improve the implementation of treatment guidelines and admission pediatric care in district hospitals in Kenya. In a cluster randomized trial, groups of patients rather than individual patients are randomly assigned to receive alternative interventions and the outcomes in different “clusters” of patients are compared. In this trial, each cluster is a district hospital.

What Did the Researchers Do and Find?

The researchers randomly assigned eight Kenyan district hospitals to the “full” or “control” intervention, interventions that differed in intensity but that both included more strategies to promote implementation of best practice than are usually applied in Kenyan rural hospitals. The full intervention included provision of clinical practice guidelines and training in their use, six-monthly survey-based hospital assessments followed by face-to-face feedback of survey findings, 5.5 days training for health care workers, provision of job aids such as structured pediatric admission records, external supervision, and the identification of a local facilitator to promote guideline use and to provide on-site problem solving. The control intervention included the provision of clinical practice guidelines (without training in their use) and job aids, six-monthly surveys with written feedback, and a 1.5-day lecture-based seminar to explain the guidelines. The researchers compared the implementation of various processes of care (activities of patients and doctors undertaken to ensure delivery of care) in the intervention and control hospitals at baseline and 18 months later. The performance of both groups of hospitals improved during the trial but more markedly in the intervention hospitals than in the control hospitals. At 18 months, the completion of admission assessment tasks and the uptake of guideline-recommended clinical practices were both higher in the intervention hospitals than in the control hospitals. Moreover, a lower proportion of children received inappropriate doses of drugs such as quinine for malaria in the intervention hospitals than in the control hospitals.

What Do These Findings Mean?

These findings show that specific efforts are needed to improve pediatric care in rural Kenya and suggest that interventions that include more approaches to changing clinical practice may be more effective than interventions that include fewer. These findings are limited by certain aspects of the trial design, such as the small number of participating hospitals, and may not be generalizable to other hospitals in Kenya or to hospitals in other developing countries. Thus, although these findings suggest that efforts to implement and scale up improved secondary pediatric health care will need to include more than the production and dissemination of printed materials, further research, including trials or evaluations of test programs, is necessary before widespread adoption of any multifaceted approach (which will need to be tailored to local conditions and available resources) can be contemplated.

Additional Information

Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001018.

  • WHO provides information on efforts to reduce global child mortality and on Integrated Management of Childhood Illness (IMCI); the WHO pocket book “Hospital care for children” contains guidelines for the management of common illnesses with limited resources (available in several languages)
  • UNICEF also provides information on efforts to reduce child mortality and detailed statistics on child mortality
  • The iDOC Africa Web site, which is dedicated to improving the delivery of hospital care for children and newborns in Africa, provides links to the clinical guidelines and other resources used in this study

Introduction

Common illnesses including pneumonia, malaria, and diarrhoea remain major contributors to child mortality in low-income countries [1]. Hospital care of severe illnesses may help improve survival, and disease-specific clinical guidelines have been provided by the World Health Organization (WHO) for more than 15 y [2], and as collated texts since 2000 [3],[4]. These guidelines form part of the Integrated Management of Childhood Illnesses (IMCI) approach adopted by over 100 countries. However, in contrast to its primary care aspects [5],[6], implementation of IMCI at district hospitals has not been evaluated. Paediatric hospital care is often inadequate in our setting and in other low-income countries in both Africa and Asia [7]–[10], with most inpatient deaths occurring within 48 h of admission [11].

We therefore set out to develop and test a strategy to improve paediatric care in district hospitals in partnership with the Kenyan government [12]–[14]. We considered a trial of alternative interventions necessary for ethical reasons and because systematic reviews indicated uncertainty in the value of multicomponent interventions [15]. Our evaluation is based on the classical Donabedian approach—assessing structure, process, and valued health system outcome measures [16]. We randomised hospitals, rather than individuals, to intervention groups because the intervention was designed to influence how the paediatric teams provided care. A cluster randomised trial also offered logistical convenience in implementing certain intervention components, which by their nature (training, feedback, supervision) are easier to administer to groups than to individuals. To provide data to inform debate on the plausibility of any cause–effect relationship arising from the trial data, we also planned an evaluation that spanned a realistic timescale, evaluated possible postintervention deterioration, and assessed intervention context, adequacy, and barriers to implementation [12],[17]–[20].

Methods

Study Sites and Participants

Eight rural hospitals (H1 to H8) were chosen purposefully from four of Kenya's eight provinces to provide some representation of the variety of rural hospital settings encountered in Kenya (Table 1) [12]. Hospitals admitting a minimum of 1,000 children and conducting at least 1,200 deliveries per year were eligible for inclusion. Prior to the study, medical records documenting admission information were written as nonstandard, free-text notes in all eight hospitals. The Ministry of Health typically disseminates national guidelines for hospital care to facilities through print materials and ad hoc or opportunistic workshops or seminars; it had not previously been able to augment this approach with systematic efforts or to provide specific supervision to support paediatric hospital care. Further, none of the eight hospitals had explicit procedures for implementing new clinical guidelines.

Table 1. Baseline hospital characteristics and characteristics of 8,205 paediatric admission events at baseline and during the 18-mo intervention period.

https://doi.org/10.1371/journal.pmed.1001018.t001

We collected data from medical records of paediatric admissions aged 2–59 mo to describe paediatric care practices of clinicians and nursing staff targeted by the guidelines, training, and feedback. The Kenya Medical Research Institute National Ethics and Scientific review committees approved the study (Texts S1 and S2).

Randomization and Masking

Prior to inclusion in the study, the eight shortlisted hospitals were visited and meetings were held with each hospital management team. At these meetings the study design, randomization, potential inputs, approach to data collection, and study duration were explained. All hospital management teams subsequently assented to their hospital's participation and randomization after internal discussions. Assent from the hospitals' catchment populations was not sought. Staff in all hospitals were made aware of the study's overall aim of exploring ways to improve care, and of the need for data collection, through specific presentations made after randomization at the start of introductory training and through written information sheets. After obtaining the hospitals' assent we allocated the eight hospitals (clusters) to a full (intervention group, hospitals H1–H4) or partial (control group, hospitals H5–H8) package of interventions using restricted randomization. Of 70 possible allocations, seven defined two relatively balanced groups (Table 1). These allocations were written on identical pieces of paper, with hospitals represented by codes, and one allocation was randomly selected using a “blind draw” procedure. Participating hospitals and the research team could not be masked to group allocation. However, information on group allocation was not publicly disseminated and the geographic distances between hospitals were large. We therefore do not believe that users of the hospitals were aware of, or influenced by, the form of intervention allocated to a hospital.
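For readers interested in the mechanics, a minimal Python sketch of such a restricted randomisation appears below. The baseline figures and the balance rule are hypothetical stand-ins; the trial balanced on the Table 1 characteristics and found 7 acceptable allocations among the 70 possible.

```python
import itertools
import random

# Hypothetical baseline characteristics (annual admissions, mortality) for
# H1-H8; the trial used the characteristics in Table 1, not these numbers.
hospitals = {
    "H1": (1500, 0.06), "H2": (2100, 0.13), "H3": (3400, 0.04),
    "H4": (1200, 0.09), "H5": (1800, 0.07), "H6": (2500, 0.12),
    "H7": (3100, 0.05), "H8": (1300, 0.10),
}

def balanced(group_a, group_b, tol=0.25):
    """Hypothetical balance rule: group means of each characteristic
    must agree to within a relative tolerance."""
    for i in range(2):  # admissions, mortality
        a = sum(hospitals[h][i] for h in group_a) / len(group_a)
        b = sum(hospitals[h][i] for h in group_b) / len(group_b)
        if abs(a - b) / max(a, b) >= tol:
            return False
    return True

codes = sorted(hospitals)
# All 70 ways of assigning four of the eight hospitals to the full arm.
allocations = [set(c) for c in itertools.combinations(codes, 4)]
acceptable = [a for a in allocations if balanced(a, set(codes) - a)]
# random.choice stands in for the physical blind draw of one slip of paper.
chosen = random.choice(acceptable)
print("Intervention:", sorted(chosen), "Control:", sorted(set(codes) - chosen))
```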

Study Intervention

The intervention, delivered over 18 mo (from September 2006 to April 2008), aimed to improve paediatric admission care by promoting hospitals' implementation of best-practice guidelines and supporting local efforts to tackle organizational constraints. Before the trial commenced, the decision was taken to set the primary endpoint for measuring intervention effectiveness at the end of this 18-mo active intervention period, with monitoring of intervention sites planned to continue for a further 12 mo after active intervention had ended. Funds were not available to support comparable extended monitoring in control sites. The intervention components are labeled 1–6 and a–c in Figure 1 [21] and included: (1) setting up a scheme for regular hospital assessment through six-monthly surveys, followed by (2) face-to-face feedback of findings in intervention sites and (a) written feedback in both groups. The other components were: (3) 5.5-d training aimed at 32 health workers of all cadres approximately 6–10 wk after baseline surveys (July to August 2006) in intervention hospitals [13], (b) provision of clinical practice guidelines introduced with training, (c) job aides, (4) an external supervisory process, and (5) identification of a full-time local facilitator (a nurse or diploma-level clinician) responsible for promoting guideline use and on-site problem solving [19]. Supervision visits took place approximately every two to three months, but facilitation remained in place throughout the 18 mo. The package for control sites (H5–H8) included five components (1, 6, a, b, and c): (1) six-monthly surveys with written feedback only, provision of (b) clinical practice guidelines and (c) job aides, and (6) a 1.5-d initial guideline seminar for approximately 40 hospital staff. The design thus compares two alternative intensities of intervention, both providing considerably more than is routinely delivered, although we refer to one arm as the “control.”

Figure 1. Graphical depiction of the complex intervention delivered over an 18-mo period (adapted from Perera et al. [21]).

Circles represent activities and squares represent objects; components delivered concurrently appear side by side.

https://doi.org/10.1371/journal.pmed.1001018.g001

One of the job aides, introduced to all sites with all training and continuously supplied to improve documentation of illness, was a paediatric admission record (PAR) form. This was to replace traditional “blank paper” medical notes [22]. All hospitals were aware that their records and patient management were to be regularly evaluated. All job aides, training materials, and assessment tools are available online (http://www.idoc-africa.org/docs/list/cat/5/subcat/27).

Data Collection

Data were collected at baseline and then at six-monthly intervals, over six surveys in intervention hospitals (surveys 1–6) and four in control hospitals (surveys 1–4) (Figure 1). A single survey took approximately 2 wk, with all sites surveyed within a maximum 6-wk consecutive period by employing up to four teams. The survey tools and team training have been described in detail elsewhere [14]. In brief, data were collected using three tools adapted from previous work [7],[8] and then extensively pretested: a checklist of structure indicators, patient case-record data abstraction forms, and a structured parent/guardian interview tool. For the parent/guardian interviews, formal written consent was obtained prior to data collection, and no parent/guardian refused consent. Ethical approval was granted for confidential abstraction of data from archived case records without individuals' consent. Survey team leaders remained the same throughout the study, and teams received 3 wk of initial training that included a pilot survey. Data collectors could not be blinded to allocation, but all were guided by standard operating procedures and, for case records, a 10% sample was independently reevaluated by the survey supervisor during each survey. Agreement rates for abstracted data were consistently greater than 95%.

Case records from a random sample of calendar dates in each 6-mo intersurvey period were selected, with the proportion of dates sampled adjusted to yield approximately 400 records given each hospital's admission rate. On the basis of prior experience we aimed to interview 50 caretakers of admitted children during each 2-wk survey (surveys 1–4).
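As an illustration of this record-sampling step, the short Python sketch below chooses a fraction of calendar dates sized to yield roughly 400 records; the admission rate and period length are invented values, not trial parameters.

```python
import random

TARGET_RECORDS = 400
DAYS_IN_PERIOD = 182  # approximate 6-mo intersurvey period (assumed)

def sample_dates(daily_admissions, days=DAYS_IN_PERIOD, target=TARGET_RECORDS):
    """Sample a fraction of calendar dates so that the expected number
    of case records abstracted is roughly `target`."""
    expected_total = daily_admissions * days
    fraction = min(1.0, target / expected_total)
    n_dates = round(fraction * days)
    return sorted(random.sample(range(1, days + 1), n_dates))

# A hypothetical hospital admitting ~8 children/day: ~50 of 182 dates sampled.
print(len(sample_dates(8)))
```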

Performance Indicators

Primary effectiveness measures were 14 process indicators measured on paediatric admissions aged 2–59 mo at 18 mo postbaseline (survey 4). Secondary measures were four valued system outcomes of admission and changes in structure measured at the hospital level. The trial was not designed to evaluate mortality effects.

Process indicators.

Indicators reflected standards defined by the clinical guidelines, focusing on pneumonia, malaria, and diarrhoea and/or dehydration, which together account for more than 65% of paediatric admissions and deaths [13]. These indicators span assessment, therapeutic, and supportive care. We defined dichotomous variables for process errors, e.g., wrong intravenous fluid prescription. However, to summarize assessment, an aggregate assessment score for each child (range 0–1) was calculated by counting the number of features documented and dividing this by the total relevant for that child according to the guidelines (pneumonia 8, malaria and diarrhoea/dehydration 6 each). The denominator of the score was thus child specific, depended on the extent of comorbidity, and had a maximum value of 16 because two features of severe illness are shared across conditions.
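To make the score concrete, the following Python sketch computes the child-specific denominator as the union of the feature sets relevant to each diagnosis; the sign names are hypothetical placeholders rather than the actual guideline checklist items.

```python
# Hypothetical feature sets; the real checklists come from the clinical
# guidelines (pneumonia 8 items, malaria 6, diarrhoea/dehydration 6,
# with two severity features shared across all three conditions).
SHARED = {"able_to_drink", "conscious_level"}            # assumed shared items
FEATURES = {
    "pneumonia": SHARED | {"cough", "resp_rate", "indrawing", "cyanosis",
                           "grunting", "wheeze"},                    # 8 items
    "malaria":   SHARED | {"fever", "pallor", "convulsions",
                           "prostration"},                           # 6 items
    "dehydration": SHARED | {"sunken_eyes", "skin_pinch",
                             "diarrhoea_duration", "restlessness"},  # 6 items
}

def assessment_score(diagnoses, documented):
    """Aggregate assessment score in [0, 1]: documented relevant signs
    divided by all signs relevant to this child's diagnoses."""
    relevant = set().union(*(FEATURES[d] for d in diagnoses))  # at most 16
    return len(relevant & set(documented)) / len(relevant)

# A child with malaria and dehydration has 10 relevant signs (6 + 6 - 2
# shared); four documented signs give a score of 0.4.
print(assessment_score(["malaria", "dehydration"],
                       ["fever", "pallor", "sunken_eyes", "conscious_level"]))
```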

Outcome indicators.

These indicators reflected adherence to key policy recommendations and included vitamin A prescription, identification of missed opportunities for immunization, and universal provider-initiated testing and counselling (PITC) for HIV. A fourth was based on a score (range 0–4) reflecting caretakers' correct knowledge, at discharge, of their child's diagnosis and of the number, duration, and frequency of discharge drugs.

Structure indicators.

The availability of equipment, basic supplies, and service organization was evaluated using a checklist of 113 items needed to provide guideline-directed care, representing seven logical groupings [23]. Data were collected by observation and by interviewing senior hospital staff. A simple, unweighted proportion of the 113 items available was derived; the change in this proportion from survey 1 to survey 4 was calculated for each hospital, and the mean changes in the intervention and control groups were compared.
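A minimal sketch of this calculation in Python, using invented per-hospital proportions rather than the trial's data, might read:

```python
from statistics import mean

# Hypothetical proportions of the 113 checklist items available at
# survey 1 and survey 4 (illustrative numbers only).
survey1 = {"H1": 0.50, "H2": 0.47, "H3": 0.55, "H4": 0.41,
           "H5": 0.52, "H6": 0.44, "H7": 0.58, "H8": 0.46}
survey4 = {"H1": 0.78, "H2": 0.70, "H3": 0.81, "H4": 0.69,
           "H5": 0.60, "H6": 0.52, "H7": 0.66, "H8": 0.55}

change = {h: survey4[h] - survey1[h] for h in survey1}
intervention, control = ["H1", "H2", "H3", "H4"], ["H5", "H6", "H7", "H8"]

# Mean change in each group and the between-group difference.
diff = (mean(change[h] for h in intervention)
        - mean(change[h] for h in control))
print(f"Difference in mean change: {diff:.2f}")
```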

Sample Size

There were 70 district hospitals in Kenya at the time of the study. Hospitals from four of Kenya's eight provinces without potentially confounding, nonstudy interventions and meeting the outlined eligibility criteria were shortlisted. Data on additional criteria felt to help define the range of contexts in Kenya were then evaluated, and eight hospitals from four provinces were purposefully selected so that at least two of the eight met each positive and negative criterion (Table 1), two hospitals came from each of the four provinces, and the logistical implications of different locations were represented. The sample size of eight hospitals was estimated using two approaches, to compare performance within each hospital (plausibility design) and across the two arms of the trial (cluster RCT analysis). Within hospitals, we estimated that 50% correct performance could be estimated with precision (95% confidence intervals [CIs]) of ±7% with 200 admission records (50% of 400 sampled admissions), or ±10% with 100 admission records. The second calculation, for group (C-RCT) comparisons, accounted for the clustered nature of the data. The median intraclass correlation coefficient (ICC) for 46 quality of care variables estimated from a health facility cluster survey in Benin was ρ = 0.2 [24]. Employing this value for the ICC, we estimated that 100 observations per cluster would provide 80% power to detect a 50% or greater difference in proportions between intervention and control arms at 18 mo follow-up [25].
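The sketch below reproduces both calculations in Python under stated assumptions. The within-hospital precision uses the usual normal approximation; the cluster calculation follows the unmatched formula of Hayes and Bennett [25], converting the ICC into a between-cluster coefficient of variation. The proportions p0 and p1 are our assumed reading of “a 50% or greater difference,” not values reported in the paper.

```python
from math import sqrt

Z_ALPHA, Z_BETA = 1.96, 0.84  # two-sided alpha = 0.05, power = 80%

# (a) Within-hospital precision: 95% CI half-width around p = 0.5.
for n in (200, 100):
    print(n, round(Z_ALPHA * sqrt(0.5 * 0.5 / n), 3))  # ~0.069 and ~0.098

# (b) Clusters per arm (Hayes & Bennett 1999, unmatched design):
# c = 1 + (z_a + z_b)^2 * [p0(1-p0)/m + p1(1-p1)/m + k^2(p0^2 + p1^2)]
#         / (p0 - p1)^2
def clusters_per_arm(p0, p1, m, icc):
    p_bar = (p0 + p1) / 2
    k2 = icc * (1 - p_bar) / p_bar   # between-cluster CV^2 implied by the ICC
    num = (Z_ALPHA + Z_BETA) ** 2 * (
        p0 * (1 - p0) / m + p1 * (1 - p1) / m + k2 * (p0 ** 2 + p1 ** 2))
    return 1 + num / (p0 - p1) ** 2

# Assumed reading of "a 50% or greater difference": 50% versus 100%
# performance, with m = 100 records per cluster and ICC = 0.2.
print(clusters_per_arm(0.5, 1.0, 100, 0.2))  # ~3.7, i.e., 4 clusters per arm
```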

Statistical Analysis

Data were double entered and verified in Microsoft Access and analysed using Stata, version 10 (Stata Corp.) according to the prespecified analysis plan.

Descriptive analysis.

We present characteristics of hospitals at baseline and of children contributing admission data during surveys 1–4. Process and outcome indicators are summarized as percentages and the absolute changes (95% CI) between survey 1 and 4 calculated for each hospital.

Comparison of intervention and control arms.

Two approaches were used. The first was a cluster-level analysis of mean change from baseline in the intervention (n = 4) and control (n = 4) groups, a test of mean difference-in-difference, using an unpaired t-test (with individual sample variances if appropriate), which is reasonably robust to deviations from normality even for a small number of clusters. The second approach compared the groups at survey 4 using a two-stage method [26]. In the first stage, logistic or linear regression analyses were conducted for each outcome adjusting for hospital-level covariates (all-cause paediatric mortality, malaria transmission, and size) and, at the patient level, gender and illness outcome (alive or died), but not study group. The observed events were then subtracted from the predicted events from these regressions to obtain a residual for each cluster. In the second stage, the cluster residuals were compared between groups using a t-test [26].
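A purely illustrative sketch of the two-stage method on synthetic data (hospital names aside, all variables, covariates, and effect sizes are invented) might look like this in Python:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: 8 hospitals, 100 admissions each (for illustration only).
df = pd.DataFrame({
    "hospital": np.repeat([f"H{i}" for i in range(1, 9)], 100),
    "male": rng.integers(0, 2, 800),
    "died": rng.binomial(1, 0.07, 800),
})
df["intervention"] = df["hospital"].isin(["H1", "H2", "H3", "H4"])
# Invented outcome: guideline-adherent prescribing, better in the full arm.
df["adherent"] = rng.binomial(1, 0.4 + 0.3 * df["intervention"])

# Stage 1: logistic regression on covariates, deliberately *excluding*
# study group, giving each child a predicted probability.
X = sm.add_constant(df[["male", "died"]].astype(float))
fit = sm.GLM(df["adherent"], X, family=sm.families.Binomial()).fit()
# Predicted minus observed, per the description above; the sign convention
# does not affect the two-sample t-test.
df["resid"] = fit.predict(X) - df["adherent"]

# Cluster residuals: summed per hospital.
cluster = df.groupby("hospital").agg(residual=("resid", "sum"),
                                     intervention=("intervention", "first"))

# Stage 2: unpaired t-test comparing the four residuals in each arm.
t, p = stats.ttest_ind(cluster.loc[cluster["intervention"], "residual"],
                       cluster.loc[~cluster["intervention"], "residual"])
print(f"t = {t:.2f}, p = {p:.3f}")
```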

Performance during the postintervention period.

Data from intervention hospitals (surveys 4–6) were analysed to determine the impact of intervention withdrawal by assessing trends graphically and using regression analysis. Linear and binomial regression analyses were used to assess whether means or proportions changed over time, by testing for a linear trend across the postintervention period (surveys 4–6).
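For a single indicator, such a trend test can be set up as below; the counts are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical counts for one indicator in one hospital: children correctly
# managed out of those eligible at surveys 4, 5, and 6.
survey = np.array([4.0, 5.0, 6.0])
correct = np.array([180, 175, 182])
eligible = np.array([200, 198, 205])

# Binomial GLM with a linear term for survey number: a slope near zero
# (large p-value) suggests no material postintervention decline.
X = sm.add_constant(survey)
model = sm.GLM(np.column_stack([correct, eligible - correct]), X,
               family=sm.families.Binomial()).fit()
print(model.params, model.pvalues)
```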

We acknowledge the use of multiple significance tests and report 95% CIs and exact p-values where appropriate, noting that, given this multiplicity, p-values lower than those traditionally considered “significant” might be given greater weight. We would, however, suggest that consideration of the plausibility of the intervention's effectiveness should also take into account any consistency of effect across indicators.

Results

All hospitals participated in each survey as planned (Figure 2). The intervention's implementation is summarized in Table 1: the intended training of at least 32 workers (the majority nurses) was attained in three of the four intervention sites. No hospital received additional, nonstudy paediatric training during the study period. Staff turnover, which was of a like-for-like nature, was high in both intervention and control hospitals, especially in the larger hospitals (H3 and H7). At 18 mo, only 5% (2/35) to 13% (3/23) and 0 to 26% (6/23) of frontline clinical staff in the intervention and control hospitals, respectively, had received initial training (Table 1). As part of supervisory activities, the implementation team conducted an additional 10–12-h training session in two intervention hospitals and two to three small group sessions of 2–4 h in all four intervention hospitals over the 18-mo intervention period.

Figure 2. Trial profile.

*Caretaker interviews not conducted in control sites 12 mo after intervention (see Tables S1, S2, S3, S4, S5).

https://doi.org/10.1371/journal.pmed.1001018.g002

Intervention and control sites were similar at baseline (Table 1), although routinely reported prior paediatric mortality varied from 4.1% to 13.4%. For the primary process of care indicators, 1,130 and 1,005 case records were available at baseline, and 1,158 and 1,157 at 18 mo, for intervention and control hospitals, respectively (Table 1). Additional data summarizing the patient populations at cluster level are provided in Tables S1 and S2.

Primary Effectiveness Measures

Results were similar from both approaches used to compare intervention arms, i.e., the adjusted comparison at 18 mo and the difference-in-difference analysis. For brevity, we outline only the results of the adjusted comparisons and present the other data in Tables S3, S4, S5.

Process indicators.

Of the 14 process of care indicators, performance at hospital level on the three indicators assessed for every admission was highly variable and often poor at baseline (Table 2): e.g., documentation of weight ranged from 1% to 95%, and mean assessment scores from 0.26 to 0.44. Disease-specific treatment practices at baseline were also poor, rarely conforming to guideline recommendations (Table 2). For example, the nationally recommended (since 1998) loading dose of quinine was prescribed in <7% of appropriate cases in seven sites at baseline (Table 2).

Table 2. Changes in process and outcome indicators between baseline and 18 mo postintervention by hospital.

https://doi.org/10.1371/journal.pmed.1001018.t002

The proportion of admissions treated in line with clinical guidelines was substantially higher in intervention compared to control sites for prescription of twice rather than thrice daily quinine, once rather than thrice daily gentamicin, appropriate quinine and gentamicin dose/kg body weight, and the proportion of severely dehydrated children with correct intravenous fluid volumes (Table 3). There were no differences in proportions receiving possibly toxic gentamicin doses although this practice was relatively uncommon.

Table 3. Average performance in control and intervention hospitals at baseline and 18 mo follow-up and adjusted difference (95% CI) at 18 mo.

https://doi.org/10.1371/journal.pmed.1001018.t003

Secondary Effectiveness Measures

Outcome indicators.

At baseline, key child health policy interventions were rarely implemented. Vitamin A was prescribed only in H7, to 27% of admissions (Table 2). Health workers rarely documented missed opportunities for immunization (<9% across six sites) or offered PITC for HIV at baseline (<4% in all sites).

At 18 mo the proportion of children offered PITC for HIV was significantly higher in intervention sites (adjusted difference, 19.4%; 95% CI 12.3%–26.4%), as was checking of vaccination status (25.8%; 7.29%–44.4%). Although prescription of vitamin A and counselling improved in some hospitals, differences between groups did not attain statistical significance (Table 3).

Structure indicators.

Changes between baseline and 18 mo were positive in both groups for all domains. Improvements in intervention hospitals were, however, consistently greater than in control hospitals (Figure 3), with the difference-in-difference analysis showing a 21% greater overall improvement (p = 0.02, based on a simple t-test).

Figure 3. Average change from baseline to 18 mo postintervention in proportion of structure items available, for each major domain and combined, for hospitals in the intervention and control groups.

https://doi.org/10.1371/journal.pmed.1001018.g003

Performance within intervention sites during surveys 5 and 6.

For most process indicators that had improved, tests for trend between survey 4 and surveys 5 and 6 showed no major decline in performance even 12 mo after withdrawal of the intervention, despite continuing staff turnover (Figures 4 and S1).

Figure 4. Intervention effect on processes of care.

(a) Documentation of essential clinical signs for malaria, pneumonia, or dehydration; (b) proportion of children receiving loading dose quinine, and outcome of care; (c) the proportion of children eligible for HIV testing offered PITC during survey 1 through survey 6 (baseline to 30 mo follow-up).

https://doi.org/10.1371/journal.pmed.1001018.g004

Discussion

We tested an approach to implementing clinical guidelines for management of the illnesses that cause most deaths in children admitted to district hospitals in Kenya. We used a multifaceted approach, despite the modest success of such approaches in developed countries [15], reasoning that deficiencies in knowledge, skills, motivation, resources, and organization of care would all need to be addressed. The intervention design was guided by experience in the setting [7],[8] and theories of change and culture of practice [13],[15],[27]–[29]. Our baseline data and other reports [7]–[10] suggest that the simple availability of authoritative WHO and national guidelines—for periods of more than 15 y—is currently having little impact on hospital care for children. So what did our interventions achieve?

The full intervention package resulted in significantly greater improvements in almost all primary and secondary effectiveness measures. Within specific hospitals performance on certain indicators, e.g., recording the child's weight in H3, was already high at baseline. For these hospitals there was limited scope for improvement, but there remained significant potential for improvement at the group level, since performance for most indicators was below the projected level of 50% at baseline. Substantial, clinically important changes occurred in processes of care despite very high staff turnover amongst the often junior clinicians responsible for much care in each site. Indeed, of 109 clinical staff involved in admitting patients sampled at survey 4 from intervention hospitals, only nine (8.3%) had received any specific formal or even ad hoc training. At survey 6 this proportion had fallen to 4.4% (four out of 91), reflecting the typically high turnover of junior clinicians in such settings. As the training and guidelines were not being provided in preservice training institutions, and as formal orientation periods are absent [14], we infer, but cannot confirm, that new staff learned correct practices more commonly from established clinicians or the facilitator in intervention hospitals. Improvement in structure indicators occurred without any direct financial inputs, probably reflecting a small generalized improvement in resource availability and the use of funding from user fees (total hospital incomes varied from US$57 to US$100 per bed per month [19]) that we feel was, in part, a response to hospital feedback and the advocacy of the facilitator [14].

Improvements in quality of care thus occurred across a set of common, serious childhood conditions and over a prolonged period. These data are a major addition to reports from sub-Saharan Africa indicating that financial incentives can improve malaria-specific care and fatality [30] and that implementation of WHO guidelines can improve emergency triage assessment and treatment of children [31]–[33] and hospital care and outcomes for severe malnutrition [34]. They also complement evidence from middle-income settings where a multifaceted intervention resulted in substantial improvements in two key obstetric practices [35]. Our data, however, represent to our knowledge the first major report examining national adaptation and implementation of a broad set of rural hospital care recommendations. They are relevant to many of the 100 countries with IMCI programmes where rural hospitals have important roles supporting primary health care systems [36] and helping to reduce child mortality [37],[38].

However, while change in simple process indicators was reasonably consistent in intervention sites, in control (partial intervention) sites changes were more varied, even within hospitals (notably site H8). Certain indicators, e.g., PITC for HIV, improved in only three of four intervention sites, and then only steadily but slowly. Thus, while the full intervention may promote consistency, there was still substantial evidence of variation across indicators, across sites, and across time. Such variability is consistent with emerging debates drawing on theories of complexity, chaos, and change that emphasize the effect of interactions with contexts [39]–[41] and suggest that understanding can be informed by parallel qualitative enquiry [42]. Data collected during this study on barriers to use of guidelines [18] and views on supervision, feedback, and facilitation [14], together with the published literature [43], suggest to us that poor or slow uptake may be associated with a requirement for greater personal or organizational effort to change, the view that a task is not directly related to care of the immediate illness, or, in intervention sites, an area unlikely to be subject to local evaluation.

Limitations

Our study has limitations. Hospitals were not selected at random from the set of all eligible hospitals, for logistic reasons and because random selection of a small number of clusters may not have produced balance or guaranteed representativeness at baseline. Hospitals assented to participation and randomization, but we were not able to engage communities in this process [44], and hospitals and survey teams were aware of intervention allocation. The latter is a potential problem for results based largely on retrospective review of records. The discrepancy between documentation and performance presents a particular threat at baseline, before efforts in all sites to improve clinical notes. Prescription data are less susceptible to this limitation, however, and improved prescribing paralleled improvement in assessment indicators. Efforts to minimize possible observation bias at the point of data collection included the use of structured inventory forms, standard operating procedures, and extensive training in survey methods. With only four hospitals per group, attempts to adjust for baseline imbalance may also have had only limited success. However, to facilitate scrutiny we report on the context of the intervention [19],[20], its delivery and adequacy [12], the views of intervention recipients [18], and detailed site-specific data (see Tables S1, S2, S3, S4, S5), and suggest that all be considered for a complete interpretation of this study of a complex intervention.

Replication and Scaling Up

Demonstrations that a similar intervention package is effective in other settings would strengthen the evidence supporting widespread adoption. While few studies of this nature have been reported, we note the recent success of multifaceted interventions in middle- and high-income countries [35],[45]. However, standardizing complex interventions may be difficult, if not impossible, given the important role of context in shaping mechanisms and outcomes [46]. For this reason, future reports will attempt to provide detailed insight into how and why this intervention met with general but varying degrees of success. If our results are deemed credible, however, the data we present have a number of implications. Firstly, current efforts to implement and scale up improved referral care in low-income settings need to go beyond the existing tradition of producing and disseminating printed materials, even when these are linked to training [15]. Instead, broader health system efforts, guided by current understanding of local contexts and capabilities and by theories of change, are required.

Within Kenya it would obviously be a mistake to consider that the intervention package tested can be scaled up simply by aiming for much broader coverage with the training course we designed. Effectiveness has been demonstrated only for the multifaceted intervention. Thus, scaling up should aim to provide all inputs, not just guidelines, job aides, and introductory training. However, regular supportive supervision and performance feedback related to child and newborn care at the first referral level are not routine. Resources and systems for supervision need strengthening, and supervisors themselves will need training and organizing. Routine information systems are inadequate to generate the data required to evaluate care, and capacity for conducting and disseminating analyses as part of routine feedback is largely absent. The role of facilitator is also not one that currently exists. Although the roles required could perhaps be played by senior departmental staff, the lack of human resources means such tasks cannot simply be added to already busy jobs [19]. Furthermore, the skills or desire to facilitate change are not necessarily present amongst such mid-level managers.

Countries other than Kenya considering adopting the approach may face similar limitations. In addition, they may need to tailor some intervention components to their particular setting. For example, the detail of a clinical guideline or job aide, or the approach to training, may need to reflect available resources or local evidence. However, such adaptation would need to be complemented by careful consideration of how systems can be made ready to support implementation of new practices and improved quality of care. We would suggest this includes due attention to influencing the institutional culture and context of rural hospitals, although willingness to invest in more integrated approaches often seems lacking [47]. Finally, before making decisions on implementation, policy makers increasingly require carefully collected and reported cost-effectiveness data. Such a report is in preparation. Considering only the financial costs of specific inputs (for example, the typical 5-d training course for 32 participants at approximately US$5,000 [13], or the annual cost of a facilitator at less than US$5,000 [18]), while of some value, is insufficient for prioritizing resource use.

Conclusion

Our findings provide strong evidence that a multifaceted intervention can improve use of guidelines and, more generally, the quality of paediatric care. Cost data will help determine whether this implementation model warrants wider consideration as one approach to strengthening health systems in low-income settings.

Supporting Information

Figure S1.

Effect of intervention on the processes and outcome of care within each hospital during survey 1 through survey 6 (baseline to 30 mo follow-up).

https://doi.org/10.1371/journal.pmed.1001018.s001

(TIF)

Table S1.

Demographic characteristics of all 8,205 children aged 2–59 mo by hospital and survey.

https://doi.org/10.1371/journal.pmed.1001018.s002

(XLS)

Table S2.

The main diagnoses among all 8,205 study participants aged 2–59 mo by hospital during each survey.

https://doi.org/10.1371/journal.pmed.1001018.s003

(XLS)

Table S3.

Difference-in-difference analysis of intervention effect on process and outcome measures of quality of care.

https://doi.org/10.1371/journal.pmed.1001018.s004

(XLS)

Table S4.

Changes for process indicators by hospital during each survey.

https://doi.org/10.1371/journal.pmed.1001018.s005

(XLS)

Table S5.

Changes for outcome indicators by hospital during each survey.

https://doi.org/10.1371/journal.pmed.1001018.s006

(XLS)

Acknowledgments

The authors are grateful to the staff of all the hospitals included in the study and colleagues from the Ministry of Public Health and Sanitation, the Ministry of Medical Services, and the KEMRI/Wellcome Trust Programme for their assistance in the conduct of this study. In addition we are grateful for the input of Martin Weber, Alexander K. Rowe, Lucy Gilson, R.W. Snow, Kara Hanson, Bernhards Ogutu, and Fabian Esamai in the initial stages of this work. John Wachira, Violet Aswa, and Thomas Ngwiri helped develop and implement the training, ETAT+. Our thanks go to Jim Todd, Elizabeth Allen, and Tansy Edwards for advice on analyses and comments on the manuscript. The work of the hospital facilitators A. Nyimbaye, J. Onyinkwa, M. Kionero, and S. Chirchir is also acknowledged and this report is dedicated to M. Kionero who tragically died shortly after the study. This work is published with the permission of the Director of KEMRI.

Author Contributions

ICMJE criteria for authorship read and met: P Ayieko, S Ntoburi, J Wagai, C Opondo, N Opiyo, S Migiro, A Wamae, W Mogoa, F Were, A Wasunna, G Fegan, G Irimu, M English. Agree with the results and conclusions: P Ayieko, S Ntoburi, J Wagai, C Opondo, N Opiyo, S Migiro, A Wamae, W Mogoa, F Were, A Wasunna, G Fegan, G Irimu, M English. Conceived and designed the experiments: A Wamae, F Were, A Wasunna, M English. Analyzed the data: P Ayieko, C Opondo, G Fegan, M English. Wrote the first draft: P Ayieko, M English. Wrote the paper: P Ayieko, M English. Obtained the funding for this project: M English. Provided and coordinated training and supervision: S Ntoburi, J Wagai, G Irimu, M English. Responsible for surveys, and analyses conducted to inform feedback: P Ayieko, S Ntoburi, C Opondo, N Opiyo, J Wagai, G Irimu, M English.

References

  1. Bryce J, Boschi-Pinto C, Shibuya K, Black R; WHO Child Health Epidemiology Reference Group (2005) WHO estimates of the causes of death in children. Lancet 365: 1147–1152.
  2. World Health Organisation (1990) Acute respiratory infections in children: case management in small hospitals in developing countries: a manual for doctors and other senior health workers. Geneva: WHO.
  3. World Health Organisation (2000) Management of the child with a serious infection or severe malnutrition. Guidelines for care at first-referral level in developing countries. Geneva: WHO.
  4. World Health Organisation (2005) Pocket book of hospital care for children: guidelines for the management of common illnesses with limited resources. Geneva: WHO.
  5. Armstrong Schellenberg J, Bryce J, de Savigny D, Lambrechts T, Mbuya C, et al. (2004) The effect of Integrated Management of Childhood Illness on observed quality of care of under-fives in rural Tanzania. Health Policy Plan 19: 1–10.
  6. Pariyo G, Gouws E, Bryce J, Burnham G (2005) Improving facility-based care for sick children in Uganda: training is not enough. Health Policy Plan 20: i58–i68.
  7. English M, Esamai F, Wasunna A, Were F, Ogutu B, et al. (2004) Assessment of inpatient paediatric care in first referral level hospitals in 13 districts in Kenya. Lancet 363: 1948–1953.
  8. English M, Esamai F, Wasunna A, Were F, Ogutu B, et al. (2004) Delivery of paediatric care at the first-referral level in Kenya. Lancet 364: 1622–1629.
  9. Nolan T, Angos P, Cunha A, Muhe L, Qazi S, et al. (2001) Quality of hospital care for seriously ill children in less-developed countries. Lancet 357: 106–110.
  10. Reyburn H, Mwakasungula E, Chonya S, Mtei F, Bygbjerg I, et al. (2008) Clinical assessment and treatment in paediatric wards in the north-east of the United Republic of Tanzania. Bull World Health Organ 86: 132–139.
  11. Berkley JA, Lowe BS, Mwangi I, Williams T, Bauni E, Mwarumba S, et al. (2005) Community acquired bacteremia amongst children admitted to a rural district hospital in Kenya. N Engl J Med 352: 39–47.
  12. English M, Irimu G, Wamae A, Were F, Wasunna A, et al. (2008) Health systems research in a low-income country: easier said than done. Arch Dis Child 93: 540–544.
  13. Irimu G, Wamae A, Wasunna A, Were F, Ntoburi S, et al. (2008) Developing and introducing evidence based clinical practice guidelines for serious illness in Kenya. Arch Dis Child 93: 799–804.
  14. Nzinga J, Ntoburi S, Wagai J, Mbindyo P, Mbaabu L, et al. (2009) Implementation experience during an eighteen month intervention to improve paediatric and newborn care in Kenyan district hospitals. Implement Sci 4: 45.
  15. Grimshaw J, Thomas R, MacLennan G, Fraser C, Ramsay C, et al. (2004) Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 8: 1–72.
  16. Donabedian A (1988) The quality of care: how can it be assessed? JAMA 260: 1743–1748.
  17. Habicht J, Victora C, Vaughan J (1999) Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 28: 10–18.
  18. Nzinga J, Mbindyo P, Mbaabu L, Warira A, English M (2009) Documenting the experiences of health workers expected to implement guidelines during an intervention study in Kenyan hospitals. Implement Sci 4: 44.
  19. English M, Ntoburi S, Wagai J, Mbindyo P, Opiyo N, et al. (2009) An intervention to improve paediatric and newborn care in Kenyan district hospitals: understanding the context. Implement Sci 4: 42.
  20. Mbindyo P, Gilson L, Blaauw D, English M (2009) Contextual influences on health worker motivation in district hospitals in Kenya. Implement Sci 4: 43.
  21. Perera R, Heneghan C, Yudkin P (2007) Graphical method for depicting randomised trials of complex interventions. BMJ 334: 127–129.
  22. Mwakyusa S, Wamae A, Wasunna A, Were F, Esamai F, et al. (2006) Implementation of a structured paediatric admission record for district hospitals in Kenya – results of a pilot study. BMC Int Health Hum Rights 6.
  23. Opondo C, Ntoburi S, Wagai J, Wafula J, Wasunna A, et al. (2009) Are hospitals prepared to support newborn survival? An evaluation of eight first-referral level hospitals in Kenya. Trop Med Int Health 14: 1165–1172.
  24. Rowe A, Lama M, Onikpo F, Deming M (2002) Design effects and intraclass correlation coefficients from a health facility cluster survey in Benin. Int J Qual Health Care 14: 521–523.
  25. Hayes R, Bennett S (1999) Simple sample size calculation for cluster-randomized trials. Int J Epidemiol 28: 319–326.
  26. Hayes R, Moulton L (2009) Cluster randomised trials. London: Chapman & Hall/CRC.
  27. English M (2005) Child survival: district hospitals and paediatricians. Arch Dis Child 90.
  28. Ferlie E, Shortell S (2001) Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q 79: 281–315.
  29. Michie S, Johnston M, Abraham C, Lawton R, Parker D, et al.; “Psychological Theory” Group (2005) Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care 14: 26–33.
  30. Biai S, Rodrigues A, Gomes M, Ribeiro I, Sodemann M, et al. (2007) Reduced in-hospital mortality after improved management of children under 5 years admitted to hospital with malaria: randomised trial. BMJ 335: 862.
  31. Gove S, Tamburlini G, Molyneux E, Whitesell P, Campbell H (1999) Development and technical basis of simplified guidelines for emergency triage assessment and treatment in developing countries. WHO Integrated Management of Childhood Illness (IMCI) Referral Care Project. Arch Dis Child 81: 473–477.
  32. Molyneux E (2001) Paediatric emergency care in developing countries. Lancet 357: 86–87.
  33. Tamburlini G, Di Mario S, Maggi R, Vilarim J, Gove S (1999) Evaluation of guidelines for emergency triage assessment and treatment in developing countries. Arch Dis Child 81: 478–482.
  34. Ashworth A, Chopra M, McCoy D, Sanders D, Jackson D, et al. (2004) WHO guidelines for management of severe malnutrition in rural South African hospitals: effect on case fatality and the influence of operational factors. Lancet 363: 1110–1115.
  35. Althabe F, Buekens P, Bergel E, Belizán J, Campbell M, et al. (2008) A behavioral intervention to improve obstetrical care. N Engl J Med 358: 1929–1940.
  36. World Health Organization (1992) The hospital in rural and urban districts: report of a WHO study group on the functions of hospitals at the first referral level. Geneva: WHO.
  37. Darmstadt G, Bhutta Z, Cousens S, Adam T, Walker N, et al. (2005) Evidence-based, cost-effective interventions: how many newborn babies can we save? Lancet 365: 977–988.
  38. Jones G, Steketee R, Black R, Bhutta Z, Morris S (2003) How many child deaths can we prevent this year? Lancet 362: 65–71.
  39. Litaker D, Tomolo A, Liberatore V, Stange K, Aron D (2006) Using complexity theory to build interventions that improve health care delivery in primary care. J Gen Intern Med 21 (Suppl 2): S30–S34.
  40. Rhydderch M, Elwyn G, Marshall M, Grol R (2004) Organisational change theory and the use of indicators in general practice. Qual Saf Health Care 13: 213–217.
  41. Rickles D, Hawe P, Shiell A (2007) A simple guide to chaos and complexity. J Epidemiol Community Health 61: 933–937.
  42. Lewin S, Glenton C, Oxman A (2009) Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study. BMJ 339: b3496.
  43. Chandler C, Jones C, Boniface G, Juma K, Reyburn H, et al. (2008) Guidelines and mindlines: why do clinical staff over-diagnose malaria in Tanzania? A qualitative study. Malar J 7: 53.
  44. Osrin D, Azad K, Fernandez A, Manandhar D, Mwansambo C, et al. (2009) Ethical challenges in cluster randomized controlled trials: experiences from public health interventions in Africa and Asia. Bull World Health Organ 87: 772–779.
  45. Scales D, Dainty K, Hales B, Pinto R, Fowler R, et al. (2011) A multifaceted intervention for quality improvement in a network of intensive care units: a cluster randomized trial. JAMA 305: 363–372.
  46. Mackenzie M, O'Donnell C, Halliday E, Sridharan S, Platt S (2010) Do health improvement programmes fit with MRC guidance on evaluating complex interventions? BMJ 340: c185.
  47. English M, Wamae A, Nyamai R, Bevins B, Irimu G (2011) Implementing locally appropriate guidelines and training to improve care of serious illness in Kenyan hospitals: a story of scaling-up (and down and left and right). Arch Dis Child 96: 285–290.