Evidence based medicine and evaluation of mental health services: methodological issues and future directions

It is becoming increasingly evident that paediatricians and general practitioners play a key role in assessing and treating children with mental health problems. A recent national survey in England and Wales found that paediatricians are probably treating more emotional and behavioural disorders in children and young people than any other single professional group and that, on average, this group of patients comprises one fifth of their referrals.1 In one large survey in the UK, almost one quarter of 7–12 year olds from a large urban area visiting their general practitioner had a psychiatric disorder,2 and more than half of the children attending a child mental health clinic in a six month period had been referred by their general practitioner.3 In the USA, a large study of 7–11 year old children referred to paediatric clinics found that almost one quarter had a diagnosable mental health disorder, with a further 42% showing subthreshold disorders.4

The particular role of general practitioners and paediatricians has been highlighted in the recent NHS Health Advisory Service report “Together we stand”,5 which resulted from a thematic review of child and adolescent mental health services (CAMHS) in England and Wales. The importance of closer working relationships between practitioners of a wide variety of disciplines is emphasised, and a strategic and tiered approach to commissioning and delivering child and adolescent mental health services is presented, albeit without supporting evidence on how best to achieve this. In the four tier framework, tier 1 includes professionals such as general practitioners, social workers based in the community, school nurses, and health visitors, who usually make the first contact with children and their families. Tiers 2, 3, and 4 consist of specialist child and adolescent mental health professionals working in increasingly restrictive treatment settings, through to inpatient provision. The importance of identifying, supporting, and strengthening professionals in the first tier, who are responsible for making preliminary formulations of the most appropriate treatments for many of these children, is stressed in “Together we stand”. However, a regional survey found little evidence of protocols, such as referral guidelines, being used across tiers; it also found that general practitioners felt overwhelmed in dealing with child mental health difficulties.6

EVIDENCE ABOUT THE EFFECTIVENESS OF TREATMENTS IS NEEDED

General practitioners and paediatricians, who may not have substantial training in child mental health, will look in part to published work when planning treatment or referral. There is increased recognition that clinical practice benefits from research evidence of the most effective treatments.7-9 However, the strength of evidence based clinical decisions will depend on the quality and relevance of the evidence and on its interpretation.

Research on the effectiveness of treatments for children has lagged behind research on treatments for adults and has tended to focus more on pharmacological, cognitive, and behavioural treatments than on psychodynamic methods or work with families.10 For example, the effectiveness of behaviour modification for children11 and of the treatment of well defined conditions such as enuresis12 13 has been demonstrated, and a great deal of information has been amassed on pharmacological methods for the treatment of attention deficit hyperactivity disorder.14 15 However, the indications and contraindications for particular forms of child psychiatric treatment are often uncertain.16

Questions have been raised about the relation between carefully controlled trials of specific treatments and the provision of child mental health services in the community. Although the benefits of many clinical interventions have been demonstrated in controlled clinical trials, small to negligible effects tend to be found when they are tested in “real world” settings.17 In the real world, children present with complex difficulties and they may need a range of integrated services that change over time. Good science requires treatment to be uniform, well documented, explicit, and logical; however, good clinical treatment is individualised for the child and family, compatible with the clinician’s style, intuitive, and attentive to the relationship with the child and family.14 18

THE POLITICS OF EVIDENCE BASED MEDICINE

Because decisions about resource allocation and policy are increasingly informed by research evidence, it is timely to deliberate on the implications of evidence based services and to emphasise the importance of evaluation within a framework that allows a broad range of interventions to be assessed. The meaning of evidence based practice is sometimes incorrectly interpreted to mean that only evidence from randomised controlled trials (RCTs) is acceptable in determining appropriate treatment methods.19 Although experts in evidence based medicine agree that this is not the case,20 there is a need for increased debate about the role of different types of evidence and the ways in which evidence from different kinds of studies should be synthesised in reviews.
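
To make the quantitative side of such synthesis concrete, the sketch below (in Python, with wholly invented effect sizes and standard errors) shows fixed effect, inverse variance pooling of standardised effect sizes, one common approach in systematic reviews. It is offered purely as an illustration of what one synthesis method involves, not as a prescription for how evidence of different kinds should be combined.

```python
import math

# Hypothetical standardised mean differences (d) and standard errors (se)
# from four invented treatment trials; the numbers are illustrative only.
trials = [
    ("trial A", 0.45, 0.15),
    ("trial B", 0.30, 0.20),
    ("trial C", 0.10, 0.25),
    ("trial D", 0.55, 0.18),
]

# Fixed effect (inverse variance) pooling: each trial is weighted by the
# reciprocal of its variance, so more precise trials contribute more.
weights = [1 / se ** 2 for _, _, se in trials]
pooled = sum(w * d for (_, d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
lower, upper = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Note that this weighting scheme rewards precision, and hence large, tightly controlled trials; it thereby embodies exactly the privileging of certain study designs that is at issue in the debate described above.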

Evaluative research is not just a technical activity but an inherently political one, conducted within contexts where many parties have a stake in the outcome.21 In every situation there will be a variety of competing interests and pressures to “misuse” or change the evaluation results.22 One has only to read papers discussing the Fort Bragg study to note the amount of dissent that can arise from an evaluation of an alternative way to organise child mental health services.19 23 24 Thus far, the main conclusion of the Fort Bragg evaluation has been that there needs to be a stage between evaluating specific treatments in clinical trials and evaluating their integration into a system.25-27 Each separate component needs first to be evaluated in the real world. However, the ramifications of the study’s failure to identify clinical differences between the reorganised system of care and more traditional, fragmented services have been noted by policy makers, with the suggestion that clinicians are loath to acknowledge the possibility that they may not be providing effective treatments.28

There will be different levels of interest in, and commitment to, evaluation research. It may seem that scientists are single minded in their efforts to create “pure” designs and to measure outcomes precisely, sometimes to the potential detriment of clinician–client relations and of the generalisability of the results. In contrast, policy makers or funding authorities want to know about services in context: about what works for their particular constituency. The Cochrane collaboration model of compiling systematic reviews of evidence argues that, when they exist, most weight should be given to carefully controlled trials,29 but this approach is inclined to provide answers to questions that are easily addressed with existing research methodologies. It does not necessarily address all the challenges posed by clinical needs. Account should also be taken of factors such as the novelty of a treatment or intervention, the cultural context of service provision, and the generalisability of the treatment offered during a controlled trial.

Policy makers in the USA have suggested that the pursuit of elegant methodology has at times been detrimental to intervention programmes for disadvantaged children and families and cultural minorities.30 Depleted communities require comprehensive, multifaceted strategies to enable young people to move out of poverty and despair, while evaluators have tended to design programmes with one or two specific outcomes in the interests of science. Such an approach may generate scientifically valid recommendations that are, nevertheless, of limited value in clinical practice. Pragmatism may require that the dictates of scientific rigour be adapted. Providers of care need to be willing to work closely with researchers to help discover which interventions are effective in the real world,25 to formulate important (not just easy) questions, and to develop relevant research methodologies that will enable investigators to answer those important questions.

ECOLOGICAL VALIDITY OF THE RANDOMISED TRIAL

To demonstrate that interventions are useful in the real rather than the “experimental” world, it is necessary to demonstrate ecological (sometimes called external) validity: the extent to which experimental findings can be generalised across different persons, settings, and times.31 Although there have been repeated calls for RCT studies to assess child mental health services,32 33 there are many obstacles to conducting randomised experiments of any kind in real life settings, even when a treatment is clearly defined and well established.31 Although in theory such trials might be very valuable, a number of specific problems are likely. Many treatment trials have been targeted at potentially biased clinical samples. The caregiver’s, rather than the child’s, needs are often the primary determinants of seeking psychiatric services, and many more children with similar levels of difficulty are not included in those samples.10 Clinicians may, with the best intentions, bias samples by resisting inclusion in scientific trials if they feel that the child or family is particularly vulnerable; the actual treatment might not accurately reflect stated goals; and an inability to ensure blinding of clinicians may create conditions in which compensatory treatment of control subjects lessens the difference between groups.10 18 34 Consent for trials is usually obtained from the parent or guardian, who might be less willing to consent to a random choice of treatment for their child than they would be for themselves, and who might be concerned about the ethics of withholding treatment from disturbed children.

Evaluation of outcomes is complicated by a number of other factors. Treatments are often assessed by looking at changes in symptoms, but the range and quality of instruments available for assessing psychiatric disorders are not sufficient, particularly for younger children.35 Informants (for example, parent, child, and teacher) often differ about the extent of a young person’s problems.36 Child psychiatric disorders change in their manifestation, but measurement of the duration or intensity of symptoms is usually unreliable,37 38 and improvement in one symptom needs to be assessed in the context of the full symptom picture, with co-morbidity the rule rather than the exception.39 Finally, although the consumers of mental health services (children and their families) may have different views on the effectiveness of treatments from those of the clinicians, their views are not often sought except to describe symptoms. Commenting on the narrowness of the existing diagnostic systems and the sterility of “purely objective approaches”, the editor of the American Journal of Psychiatry recently called for physicians to become more involved in child psychiatry research and to emphasise “how our patients feel and think and change subjectively”.40
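
To make the informant problem concrete, the following sketch (Python; all ratings invented for illustration) computes Cohen’s kappa, a chance corrected agreement statistic, for hypothetical parent and teacher classifications of the same children. Modest kappa values of this kind illustrate why a single informant’s ratings are a fragile outcome measure.

```python
from collections import Counter

# Hypothetical dichotomous judgements ("case" or "non") made by a parent
# and a teacher about the same ten children; invented for illustration only.
parent = ["case", "case", "non", "case", "non", "non", "case", "non", "non", "case"]
teacher = ["case", "non", "non", "case", "non", "case", "non", "non", "non", "non"]

n = len(parent)
observed = sum(p == t for p, t in zip(parent, teacher)) / n

# Expected chance agreement, estimated from each rater's marginal frequencies.
parent_counts, teacher_counts = Counter(parent), Counter(teacher)
chance = sum((parent_counts[c] / n) * (teacher_counts[c] / n)
             for c in set(parent) | set(teacher))

# Cohen's kappa: agreement beyond chance, scaled by the maximum possible.
kappa = (observed - chance) / (1 - chance)
print(f"observed {observed:.2f}, chance {chance:.2f}, kappa {kappa:.2f}")
```

With these invented ratings the raters agree on 60% of children, yet kappa is only 0.20 once chance agreement is removed, which is why outcome studies increasingly collect and report ratings from several informants rather than relying on one.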

There are a variety of alternative experimental designs to the RCT.33 41 Quasi-experimental designs and case studies may be more effectively introduced into routine clinical practice, and the resulting information may be more valid, given the complex ethical issues inherent in randomising children or families in crisis to treatment or control groups. It has been emphasised that the question asked, rather than habit or protocol, should dictate the nature of the evidence sought.20 Acceptance of alternative research strategies, and documentation of the difficulties encountered in implementing RCT studies, which provide important information about clinical dilemmas, may be an important way of advancing child mental health service delivery and may also help in designing better randomised controlled trials in future.

Some possible ways forward

USE QUALITATIVE AND QUANTITATIVE MEASURES

The inclusion of qualitative measures in all kinds of medical research has been promoted as a way to close the gap between the sciences of discovery and implementation.42-44 It has been suggested that they are particularly appropriate for children and families, who should be treated as partners in the evaluation process.45 46 Families may have important questions about the ways in which they can or should have input into treatment plans, or about how decisions are made concerning the selection of subjects for specific programmes. When evaluation is planned without sufficiently involving families in discussion, meaningful aspects may be missed, such as the relevance of time spent travelling, or the cultural sensitivity of the staff to a family’s beliefs and customs. Open ended methods, such as focus groups or semi-structured interviews, allow families to raise questions that might not otherwise have been contemplated, and should facilitate the process of answering those questions.

Quantitative information, such as symptom levels, may need to be interpreted in the context of qualitative comments to understand discrepancies between informants, particularly for internalising problems such as depression and anxiety.37 A further issue relevant to the interpretation of symptoms as outcome indicators is that increases or decreases in particular symptoms can be associated with developmental factors. For example, girls who show high levels of disruptive behaviour before puberty might show less in adolescence, but are at risk for depression or anxiety problems, while the reverse is the case for some boys; solitary, anxious young boys may show less anxiety but more aggression or other antisocial behaviour as they mature.47 48 It is essential to know about the context in which symptoms develop and the ways in which a young person’s life may be influenced both by their difficulties and any clinical interventions. Open ended, qualitative interviews are likely to be an effective way of revealing this kind of information.

New ways should be developed of summarising qualitative studies, akin to structured abstracts, systematic reviews, and meta-analyses.

DESIGN FORMATIVE, PROCESS, AND SUMMATIVE EVALUATIONS

The changing nature of service delivery makes questions about process and user satisfaction as important as those about treatment effectiveness.49 Evaluation should be built into new and existing services, looking at both outcomes and the formative aspects of the service: the process of achieving the service in practice. One of the most important aspects of designing an evaluation strategy is to encourage all those involved in the service (funders, administrators, service providers, and users) to expect a balance between summative and formative evaluation.22 It can be tempting to call for a single, summative conclusion while overlooking the need for information about how a programme or an intervention actually worked or could be modified (for example, what steps were necessary to introduce a multiagency decision making team).

Formative evaluation is sometimes subsumed under auditing. While information about who comes to a service, who drops out, and levels of compliance is essential, there are many ways in which these statistics can be explored to find out why, for instance, self referred clients respond better to one treatment approach than another, or why a new model of the mental health team leads to poorer client satisfaction. This strategy for evaluation is being adopted in public health services—for example, with a teenage pregnancy prevention programme50—and could usefully be applied to both the prevention and the treatment of child mental health problems.

EVALUATE SYSTEMS OF SERVICE PROVISION

Ecological theories of development take account of different levels of influence, including the individual, family, school, local neighbourhood, a particular cultural group, and society.51 They can be applied usefully to mental health service evaluations. Most research has examined the effects of treatment on individual children with specific disorders. Only rarely (for example, in the now notorious Fort Bragg study)25 does evaluation consider the effect of changing the characteristics of the system of care. Yet treatment of children with emotional and behavioural symptoms usually requires a complex multisystem strategy, with collaboration between a number of professionals. Several administrative bodies might be involved, including hospital based mental health services, special education, social services, the courts or prisons, and community health. A number of studies in the USA52-54 are currently addressing the interplay between components of a system of care, following the formation of the National Institute of Mental Health Child and Adolescent Service System Program (CASSP), an initiative to promote coordinated and comprehensive services.55

As evaluation is placed on the agenda in the UK, the task of studying systems should be highlighted so that it can be developed in conjunction with studies of programmes or specific treatments. Before any planning for the evaluation of specific services, surveys, key informant interviews, and focus groups should be conducted in a community to assess needs and to evaluate service implementation at the system level.

USE A DEVELOPMENTAL MODEL TO DESIGN EVALUATIONS

Jacobs56 has provided a useful five stage model for linking the type of evaluation conducted, and the kinds of data collected, with the particular stage or developmental level that the treatment or service has reached. The first level of (formative) evaluation—pre-implementation—occurs before any treatment is provided, and documents the need for a service by providing baseline data. Recent epidemiological information may be sufficient, but a community (such as a school system or an inner city housing development) might require statistics that are more representative of local circumstances. Needs for service (including the public and personal costs of no service) may also be important, garnered by interviews with community leaders, agency heads, or special interest groups. Needs assessment should occur in conjunction with hypothesis generating and hypothesis testing experimental work to develop and trial a treatment and so “establish” it. Thus, the observational and qualitative study at this first level is followed by an intermediary, experimental stage before going on to the second stage, which is again observational/qualitative research.

The second (process) stage—the accountability of a treatment or service—is necessary to document its use and the extent to which it has reached those in need: to justify continued and perhaps increased expenditure. Accountability can be shown by the kind of information that is collected routinely for audit purposes, such as client numbers and characteristics, but may additionally involve interviews with clients and families indicating needs and responses, and interviews with therapists or service managers. Bradford57 conducted this style of evaluation as a means of assessing multi-agency consultation teams in Kent, created in response to the 1989 Children Act’s call for greater integration of services. He documented the implementation process from a small number of case reports, creating flow diagrams of team assessment procedures and implementation of agreed goals. Despite the theoretical model of agencies having equal input, he concluded that it was essential to designate a lead agency in the team meeting so that managers would sanction expenditure.

The third level of a treatment’s development—clarification—is reached when a new treatment or system change has been in place for a short time but still requires some fine tuning. Types of relevant qualitative data include details of staff meetings, interviews with staff, observation by staff of programme activities and staff processes, interviews with parents on desired benefits, and satisfaction questionnaires. A number of authors have pointed to the absence of evaluations that take into account the views of the families using services and that also include positive outcomes in addition to reduction of symptoms.46

Bradford’s evaluation used client and referrer satisfaction questionnaires as a third stage in the evaluation process.57 A further example at this level is the evaluation of a structured method for facilitating treatment plans by multidisciplinary teams—focal inpatient treatment planning (FITP).58 It was being used in two locations: a hospital inpatient unit and a community based agency working with substance abusing parents.59 Team meetings were observed and a number of staff members were interviewed. Observation showed ways in which FITP facilitated identification of the unique aspects of each case, and interviews revealed that it empowered clients through the identification of attainable goals and an emphasis on families’ strengths. It was particularly helpful in the community setting, where staff had more diverse clinical experience.

At the fourth level of treatment development—progress toward objectives—process evaluation can clarify changes necessary for continued improvement in a service, but it is also necessary to demonstrate effectiveness through quantitative outcome evaluation. Standardised symptom scores are likely to be the most useful outcomes, and the most realistic design may be quasi-experimental (for example, pretest and post-test with no comparison group, or a waiting list comparison). Client and family satisfaction questionnaires and evidence of support for, or resistance to, using the treatment will also be important information at this stage, as will cost effectiveness.
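
As an illustration of the kind of quantitative outcome analysis such a quasi-experimental design might support, the sketch below (Python; all scores invented) compares change in hypothetical standardised symptom scores between a treated group and a waiting list comparison group. A real analysis would also need to handle baseline differences, attrition, and clustering, none of which are modelled here.

```python
from statistics import mean, stdev

# Hypothetical standardised symptom scores (higher = more symptomatic)
# before and after a treatment period; the waiting list group is untreated.
# All numbers are invented for illustration.
treated_pre, treated_post = [62, 70, 65, 58, 72, 66], [51, 60, 57, 50, 63, 55]
waiting_pre, waiting_post = [64, 68, 61, 59, 71, 67], [62, 66, 60, 58, 70, 64]

def change_scores(pre, post):
    """Per-child change (post minus pre); negative values mean improvement."""
    return [after - before for before, after in zip(pre, post)]

treated_change = change_scores(treated_pre, treated_post)
waiting_change = change_scores(waiting_pre, waiting_post)

# Crude standardised difference in mean change between the groups; a real
# analysis would also adjust for baseline severity and dropout.
difference = mean(treated_change) - mean(waiting_change)
pooled_sd = stdev(treated_change + waiting_change)
print(f"treated change {mean(treated_change):.1f}, "
      f"waiting list change {mean(waiting_change):.1f}, "
      f"standardised difference {difference / pooled_sd:.2f}")
```

Using change scores against a waiting list comparison, rather than post-test scores alone, is what distinguishes this design from the weaker uncontrolled pretest/post-test option mentioned above, since it subtracts out improvement that would have occurred without treatment.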

The Fort Bragg study provides an excellent example of evaluation at this level.25 The design was quasi-experimental: a comprehensive and coordinated system of child and adolescent mental health services was introduced at one army base in accordance with the recommendations of the US Child and Adolescent Service System Program.55 The comparison was made with a matched sample of children from a second military base, which had not reorganised its services. Process measures included descriptions of intake assessments and case management in terms of system coordination and fragmentation, and outcomes were child mental health symptoms, child competencies, family environment, client satisfaction, and the cost of providing the service (for each child served and for each eligible child). The evaluation found that the model of care was implemented accurately and was judged to be of good quality. The demonstration site served more children, more quickly, for longer, and with fewer dropouts, but costs for each child seen and for each eligible child were higher. There were, however, no differences between sites in mental health outcomes, which led to questions about implementing such a large scale revision of service provision in one step,25 and about the validity of the notion that coordinated services lead to better outcomes.23 24

Once a treatment or service is shown to be effective, the tendency is to devote available resources to providing the service rather than making additional refinements.60 To remedy this, once desired outcomes have been demonstrated, Jacobs’ fifth level of evaluation—service impact—can be implemented. It is classically evaluated through experimental, quantitative research methodologies, including RCTs, but these are increasingly augmented by qualitative research conducted alongside the experimental work.56 This level of evaluation can start to identify which treatments are most effective, under which conditions, for which children, at which developmental level, under what environmental conditions, and with what concomitant parental, familial, or environmental interventions.61 Not all these questions can be answered in one study. The more common model is a large trial of a complete treatment package, such as the Infant Health and Development Programme’s randomised, multisite design to evaluate a service for families with newborn premature infants, which comprised home visiting, specialised child development centres, and parent groups to provide social support.62 This is akin to health services research—that is, study of the whole “process” rather than individual studies of component elements such as diagnostic tests, treatment, and prognosis. The development of guidelines for “best practice” in the design and execution of these study methods might be a useful endeavour, equivalent to developing “user guides” for quantitative research methods.

Conclusions

Advocates of the evidence based medicine approach recommend the use of many different sources of evidence, using methodologies appropriate to the question being asked.20 There is a need for more empirical evidence about the outcomes of child and adolescent mental health treatment. Nevertheless, the complexity of children’s lives calls for imaginative approaches to research. Researchers and clinicians should be able to draw on a range of different kinds of evaluation to reach conclusions about treatment. Traditional randomised controlled trials of treatments for specific disorders are crucial, but if the full picture is to be revealed, other strategies are needed. Several recommendations have been made.

First, in conjunction with quantitative research, the qualitative experiences of children and families, and of those providing a treatment or service, need to be examined. A creative mix of measures, with an approach that incorporates the strengths of children, families, and clinicians, could lessen the tension between internal and external validity. In this way, clinical practice and policy can be developed in a manner that takes account of the multiple and varied influences on children’s health and development. Second, studies need to address the way in which a treatment or service becomes established, in addition to clinical outcomes and cost. The sharp division between audit and evaluation needs to be lessened, so that evaluation can be used constructively in a dynamic process: to develop new services, refine existing ones, and allow clinicians the opportunity to participate in the collection, documentation, and use of evidence based on their own clinical practice.

Third, much of the focus of previous research has been on the effectiveness of specific treatments for specific disorders. In view of the recommendations for changes in the organisation of mental health services, both in the USA and the UK, more attention needs to be given to evaluating systems of service provision. Finally, a developmental model for designing evaluation studies is presented that should be helpful in conceptualising what methods or measures are the most appropriate and how to design an “organic” research model, responsive to all those involved in providing and using services.

Acknowledgments

We are grateful to Dr S Kraemer for helpful advice on an earlier draft.
