
A glossary for evidence based public health
Lucie Rychetnik,1 Penelope Hawe,2 Elizabeth Waters,3 Alexandra Barratt,4 Michael Frommer1

1 Sydney Health Projects Group, School of Public Health, University of Sydney, Australia
2 Alberta Heritage Foundation for Medical Research, Department of Community Health Sciences, University of Calgary, Canada, and School of Public Health, La Trobe University, Victoria, Australia
3 Centre for Community Child Health, University of Melbourne, Murdoch Children’s Research Institute, Victoria, Australia, and Cochrane Health Promotion and Public Health Field
4 Screening and Test Evaluation Program, School of Public Health, University of Sydney, Australia

Correspondence to: Dr L Rychetnik, Sydney Health Projects Group, School of Public Health, University of Sydney, Victor Coppleson Building (D02), NSW 2006, Australia; lucier@med.usyd.edu.au

Abstract

This glossary seeks to define and explain some of the main concepts underpinning evidence based public health. It draws on the published literature, experience gained over several years’ analysis of the topic, and discussions with public health colleagues, including researchers, practitioners, policy makers, and students.

  • glossary
  • evidence based


An important aspect of current debate about evidence based public health is that interested parties interpret and apply the terms “evidence” and “evidence based” in different ways. We have sought to illustrate some of this diversity by including terms that are sometimes used to critique evidence based practice. The majority of terms, however, are those commonly used within the field, particularly those underpinning the principles and methods of evidence based public health. A few of the terms are comparatively new to this literature, and have been included to identify important emerging concepts. Our own premise for this glossary is reflected in the definitions of “public health” and “evidence based public health” that are presented at the beginning.*

EVIDENCE

In the broadest sense, evidence can be defined as “facts or testimony in support of a conclusion, statement or belief” and “something serving as proof”.1 Such a generic definition is a useful starting point, but it is devoid of context and does not specify what counts as evidence, when, and for whom.

PUBLIC HEALTH

Public health is a scientific and technical as well as a social and political endeavour that aims to improve the health and wellbeing of communities or populations. A definition of public health in the Oxford Textbook of Public Health concisely presents its multiple dimensions: “Public health is the process of mobilizing and engaging local, state, national, and international resources to assure the conditions in which people can be healthy. …The actions that should be taken are determined by the nature and magnitude of the problems affecting the health of the community. What can be done will be determined by scientific knowledge and the resources available. What is done will be determined by the social and political situation existing at the particular time and place.”2

EVIDENCE BASED PUBLIC HEALTH

Evidence based public health can be defined as a public health endeavour in which there is an informed, explicit, and judicious use of evidence that has been derived from any of a variety of science and social science research and evaluation methods.3

The definition highlights two aspects of evidence based public health: (1) the use of a particular type of evidence to inform public health decisions; and (2) an emphasis on clear reasoning in the process of appraising and interpreting that evidence.

The types of research that are commonly associated with evidence based public health are described below. (The ‘Critical Appraisal Criteria’ by which the research is judged are addressed later in the glossary.)

Research evidence—Our definition of evidence based public health is sufficiently broad to encompass a wide variety of public health research as a source of evidence. Studies can be categorised according to the questions they seek to answer, and the evidence for evidence based public health could include the following study types4 (definitions are adapted from those in a dictionary of epidemiology5 and a dictionary of qualitative inquiry6):

  • Descriptive: to identify the qualities and distributions of variables;

  • Taxonomic: to compare and classify variables into related groups or categories;

  • Analytic: to examine associations between variables—these may be hypothesised causal or therapeutic relations;

  • Interpretive: to identify and explain meanings, usually from a particular perspective;

  • Explanatory: to make observations intelligible and understandable; and

  • Evaluative: to determine quality and worth—often assessing the relevance, effectiveness, and consequences of activities.

Some proponents of evidence based practice adopt greater specificity in the type of research that contributes to evidence based public health. Brownson et al categorise only two types of evidence: type 1 is research that describes risk-disease relations, and identifies the magnitude, severity, and preventability of public health problems. Thus type 1 evidence points to the fact that “something should be done”. Type 2 evidence identifies the relative effectiveness of specific interventions aimed at addressing a problem. Thus type 2 evidence can help to determine that “this should be done”.7,8

For evidence to inform public health policy and practice, we propose a third category that highlights the importance of descriptive and/or qualitative information. Type 3 evidence includes the following: information on the design and implementation of an intervention; the contextual circumstances in which the intervention was implemented; and information on how the intervention was received. Type 3 evidence tells us “how something should be done”.

Each type of evidence may comprise various combinations of study types. Although invaluable to practitioners and policy makers, the third type of evidence is often unavailable from published papers and reports evaluating interventions. An important objective for those engaged in evidence based public health is to improve the quality, availability, and use of all three types of evidence in public health decisions.

EXPERT OPINION

Expert opinion usually refers to the views of professionals who have expertise in a particular form of practice or field of inquiry, such as clinical practice or research methodology. Expert opinion may refer to one person’s views or to the consensus view of a group of experts. When the concept of evidence based practice was first introduced, expert opinion was identified as the least reliable form of evidence on the effectiveness of interventions, and was positioned at the lowest level in “levels of evidence” hierarchies.9 Subsequent developments have determined that ranking expert opinion within levels of evidence is neither useful nor appropriate, because expert opinion is qualitatively different from the forms of evidence that are derived from research.10 Opinion can instead be identified as a means by which research is judged and interpreted, rather than as a weaker form of evidence.

LAY KNOWLEDGE

Lay knowledge refers to the understanding that members of the lay public bring to an issue or problem. Lay knowledge encompasses “the meanings that health, illness, disability and risk have for people.”11 Formal identification and examination of lay knowledge is mostly conducted through qualitative forms of inquiry.12 Adequate attention to lay knowledge has been proposed as a criterion for critically appraising qualitative research.13 Concerns have been expressed that some health professionals may not adequately value lay knowledge.14 Lay knowledge can be difficult to access and synthesise, and a focus on quantitative forms of evidence can lead decision makers to undervalue the lay knowledge that is derived from narratives and stories.15,16

ARGUMENT AND EVIDENCE

A fundamental principle of evidence based public health is the close linkage between sound argument and evidence. The following terms are relevant to this principle.

Argument refers to a sequence of statements in which the premise purports to give reason to accept the conclusion.17 Hence the premise is the proposition from which the conclusion is drawn.18 In scientific or legal debate “investigating hunches in the light of evidence or defending arguments as rational are two fundamental concerns of critical analysis”.19

Reasoning refers to the process of drawing inferences or conclusions from premises, facts, or other evidence. It is valuable to distinguish between three types of reasoning.

  • Induction refers to reasoning that proceeds from the particular to the general. Thus induction is applied to infer general conclusions or general theory from empirical data, such as particular observations or cases.

  • Deduction refers to reasoning that proceeds from the general to the particular. Thus deduction relies on general theory to infer particular conclusions.

  • Abduction refers to reasoning that makes an inference to the best available explanation; that is, selecting from a number of possibilities the hypothesis that provides the best explanation of the available evidence.6

Logic is the science of “correct” reasoning. The logic of an argument is concerned with its validity: the key question is whether, if the premises are true, we have a valid reason to accept the conclusion.17

Validity is derived from the Latin word validus, meaning strong. It refers to the degree to which something is well founded, just, or sound. Validity is often used in conjunction with qualifying terms that attribute specific meanings, as follows5:

  • Measurement validity refers to the degree to which a measurement actually measures what it purports to. Measurement validity is classified into three types. Construct validity is the extent to which the measurement corresponds to theoretical concepts or constructs; content validity is the extent to which the measurement incorporates the scope or domain of the phenomenon under study; and criterion validity is the extent to which the phenomenon correlates with an external criterion of that phenomenon. Criterion validity can be concurrent (the measurement and criterion refer to the same point in time) or predictive (the ability of the measurement to predict the criterion).5

  • Study validity refers to the degree to which the inferences drawn from a study are warranted when account is taken of the study methods; the representativeness of the study sample; and the nature of the population from which it is drawn.5

There are two types of study validity. Internal validity is the degree to which the results of a study are correct for the sample of people being studied. External validity (generalisability) is the degree to which the study results hold true for a population beyond the subjects in the study or in other settings.20

Reliability is the degree to which observations or measures can be replicated, when repeated under the same conditions. Reliability is necessary, but not sufficient, to establish the validity of a proposition. Poor reliability can be due to variability in the observer or measurement tool, or instability in the actual phenomenon under study.5

BURDEN OF PROOF

Proof is the evidence that produces belief in the “truth” of a proposition or argument.18 In a dispute, the burden of proof lies with the party responsible for providing evidence of their proposition, or for shifting a conclusion from the default position. For example, under the legal system of many countries an accused person is presumed innocent (default position) until proven guilty. The burden of proof lies with the prosecution. Standard legal questions such as “Who has the burden of proof?”, “What must be proven?”, and “By what standard must it be proven?” also apply to public health.21 There are often significant differences, however, in how these questions are answered.

The burden of proof in public health determines how evidence based practice is interpreted and applied. For example, should strategies be “considered useful until proven ineffective or assumed to be useless until proven effective? We must decide where the burden of proof lies. If the burden of proof rests on demonstrating ineffectiveness, the default is to do everything; if it rests on demonstrating efficacy, the default is to do nothing.”22

FREQUENCY AND RATE

The magnitude and severity of public health problems are often expressed as measures of frequency, such as proportions and rates.

Prevalence is the proportion of people in a population who have some attribute or condition at a given point in time or during a specified time period.

Incidence (incidence rate) is the number of new events (for example, new cases of a disease) in a defined population, occurring within a specified period of time.

Incidence proportion (cumulative incidence) is the proportion of people who develop a condition within a fixed time period. An incidence proportion is synonymous with risk. For example, the proportion of people who develop a condition during their lifespan represents the lifetime risk of disease.23
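To make these frequency measures concrete, the following is a minimal worked sketch in Python; the population sizes, case counts, and follow-up times are invented purely for illustration and do not come from the glossary.

```python
# Hypothetical numbers, for illustration only.

# Point prevalence: existing cases / population at a given point in time.
population = 10_000
existing_cases = 250
prevalence = existing_cases / population            # 0.025, i.e. 2.5% of the population

# Incidence rate: new cases / person-time at risk during a specified period.
new_cases = 40
person_years_at_risk = 9_800                        # disease-free people followed for one year
incidence_rate = new_cases / person_years_at_risk   # about 4.1 new cases per 1000 person-years

# Incidence proportion (cumulative incidence, i.e. risk):
# new cases / people at risk at the start of a fixed period.
at_risk_at_start = 9_750
incidence_proportion = new_cases / at_risk_at_start # about 0.41% risk over the period

print(f"Prevalence: {prevalence:.1%}")
print(f"Incidence rate: {incidence_rate * 1000:.1f} per 1000 person-years")
print(f"Incidence proportion (risk): {incidence_proportion:.2%}")
```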

CAUSALITY

Causality is “the relating of causes to the effects they produce”.5 Broadly, causality is about production in the sense that a cause is something that produces or creates an effect.24 Causality is fundamental to two aspects of evidence based public health: (1) demonstrating and understanding the causes of public health problems; and (2) establishing the probability and nature of causal relations between an intervention and its effects. Traditional public health research has focused on the former (the magnitude and aetiology of disease), but the literature on evidence based practice has emphasised methods and processes for generating, appraising, and applying intervention research. (See also “Evaluation” and “Critical Appraisal Criteria”).

Various definitions of causality exist, and differing perspectives can result in different conclusions on whether a causal relation has been established. Such differences also lead to different expectations of what constitutes “good” evidence for public health decisions. Some alternative formulations of causality are described below.

Causes are sometimes described as necessary or sufficient causes. A cause is necessary when it must always precede the effect in order for that effect to occur; without the cause, the effect cannot occur. Alternatively, a cause is sufficient when it inevitably produces an effect; if the cause is present the effect must occur. In a relation between a cause and an effect, the cause may be necessary, sufficient, neither, or both.5 Such deterministic and clear cut causal relations are not commonly observed in public health research.

Probabilistic or statistical causality is an alternative to determinism. A probabilistic cause is one that increases or decreases the chance (likelihood) that the effect will occur. A probabilistic statement about a cause and effect provides quantitative information about an estimate of the strength and nature of that relation. It also provides quantitative information on potential effect modification, and about any dose-response relation that may exist between the cause and its effect.23 The application of probabilistic causality is the cornerstone of clinical epidemiology, evidence based medicine, and evidence based public health.25–27
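As a simple illustration of the kind of quantitative statement a probabilistic view of causality supports, the sketch below computes a risk ratio and a risk difference from a hypothetical two-by-two table; the figures are invented and serve only to show the form such estimates take.

```python
# Hypothetical exposure/outcome data, for illustration only.
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total         # 0.030
risk_unexposed = unexposed_cases / unexposed_total   # 0.010

# Strength of the relation expressed two ways:
risk_ratio = risk_exposed / risk_unexposed           # 3.0: exposure triples the chance of the outcome
risk_difference = risk_exposed - risk_unexposed      # 0.020: two extra cases per 100 exposed people

print(f"Risk ratio: {risk_ratio:.1f}; risk difference: {risk_difference:.3f}")
```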

Counterfactual causality describes how the observed effect differs under different sets of conditions. A counterfactual relation can be described in deterministic or probabilistic terms, to show how the outcome (or its probability) differs when the cause is present or absent (while, ideally, all other variables are held constant).23 Counterfactual causality underlies the use of control groups in research.

PUBLIC HEALTH INTERVENTION

An intervention comprises an action or programme that aims to bring about identifiable outcomes. A public health intervention can be defined by the fact that it is applied to many, most, or all members in a community, with the aim of delivering a net benefit to the community or population as well as benefits to individuals. Public health interventions include policies of governments and non-government organisations; laws and regulations; organisational development; community development; education of individuals and communities; engineering and technical developments; service development and delivery; and communication, including social marketing.28

Public health interventions sometimes harm some individuals, and it is important that these harms are identified in any evaluation of the interventions. This allows for informed consideration of whether the harms to individuals are so small (and/or so rare) that the benefits to others outweigh those harms. For example, population immunisation programmes benefit many people who are protected by the effect of the vaccine, and the whole community benefits if the “herd immunity” becomes so great that the infectious organism finds it difficult to survive. To obtain this benefit, however, many people are inconvenienced (for example, by having sore arms for a few hours) and a very few may be harmed by the side effects of the vaccine. In most countries, the net benefit of selected immunisation programmes is considered sufficient to warrant population level intervention.

Public health interventions can also be described according to whether the “community” that is the focus of intervention was: (a) the setting for intervention; (b) the target of change; (c) a resource for intervention; or (d) the agent of change.29 McLeroy et al also distinguish between the level of intervention and the target of intervention, in that an intervention may occur at one level but produce change at other levels.29 These distinctions between different types of intervention assist with the specification of public health objectives and can guide evaluation of intervention outcomes. They also help to target research for public health evidence.

EVALUATION

The Dictionary of Epidemiology defines evaluation as “a process that attempts to determine as systematically and objectively as possible the relevance, effectiveness, and impact of activities in the light of their objectives.”5 Evaluation generates type 2 and type 3 evidence (see “research evidence”) and thus identifies what should or could be done to address a health problem, and how it can be done. The term evaluation is often used interchangeably with evaluative research,30 and intervention evaluations are also referred to as intervention studies. We have observed that the term “evaluation” is sometimes used to refer only to “in-house” quality control studies or internal audits, which, regrettably, do not have the status (or funding support) of research. The tendency to devalue evaluation may explain why there is little type 3 evidence in the published literature, despite the importance of such evidence to decision makers.

The term evaluation does not imply a particular type of study design. An evaluation could be a randomised controlled trial, an interrupted time series design, or a case study. Hierarchies of study design indicate the degree to which these studies are susceptible to bias.31 (See also “levels of evidence”.) Although up to 32 different types of evaluation have been identified,32 we include below those most commonly used in public health programme evaluation.

Process evaluation is an assessment of the process of programme delivery. The components of a process evaluation of an intervention may include assessment of the following: recruitment of participants and maintenance of participation (also known as programme reach); the context in which the programme is conducted and evaluated; resources required and used; implementation of the programme relative to the programme plan; barriers and problems encountered; the magnitude of exposure to materials and activities; initial use of or engagement in programme activities at the start of the programme; continued use of activities over time; and attainment of quality standards.33,34

Formative evaluation refers to the programme planners’ use of data from process evaluation that has been conducted early in the development of an intervention, so that adjustments to the programme can be made if necessary.35

Impact evaluation examines the initial effect of a programme on proximal targets of change, such as policies, behaviours, or attitudes. Thus impact evaluation corresponds to assessment of the initial objectives of the programme.33,34

Outcome evaluation refers to the consequent effect of a programme on the health outcomes in populations, corresponding to the programme goal or target.33,34 Outcome evaluation has also been called summative evaluation, because upon its completion a researcher or policy maker would be in a position to make an overall statement about the worth of a programme.35 Such a statement assumes prior successful completion of process and impact evaluation.

Evaluability assessment is a systematic process to check whether or not a programme is logically theorised, planned, and resourced, and sufficiently well implemented, before the conduct of an impact or outcome evaluation.36 The term “evaluability assessment” was first coined in the early 1980s with the aim of preventing wasteful outcome evaluations; that is, preventing the investment of funds to seek the effects of programmes that were so poorly designed or implemented that one would not expect effects to be present.37

Goal free evaluation is an assessment of all programme effects, whether or not they are part of the intended objectives or goals.38 The programme effects examined in goal free evaluation may be those that initially occur after intervention (corresponding to impact evaluation) and/or subsequent effects (corresponding to outcome evaluation).

Utilisation focused evaluation starts with the evaluator asking decision makers what type of information (evidence) they would find most useful.39 The purpose is to increase the transfer of evidence into practice. Part of this may include scenario setting, using hypothetical findings from a proposed study to determine how (or if) decision makers will use the data produced from the research.40 Utilisation focused evaluation can also encompass process, impact, or outcome evaluation, depending on the user’s needs. Note: goal free evaluation can also be utilisation focused—that is, tied to the interests of the intended users of that evaluation.

LOGIC OF EVIDENCE BASED INTERVENTION

The logic of evidence based practice identifies a cyclic relation between evaluation, evidence, practice, and further evaluation. It is based on the premise that evaluations determine whether anticipated intervention effects occur in practice, and identify unanticipated effects. The reports of such evaluations are a valuable source of evidence to maximise the benefits, and reduce the harms, of public health policy and practice. The evidence can also inform evaluation planning, and thus improve the quality and relevance of new research.

The various stages in this cycle tend to be completed by different groups with differing imperatives and priorities. To understand the challenges that may arise in evidence based public health, it is valuable to distinguish the following components.

Evidence reviews

To interpret and use evaluation research, the research must itself be evaluated to determine the degree to which it provides credible (valid and reliable) information, and whether the information is useful (relevant and generalisable) in a different context.41 Hence an evidence review refers to the process of critically appraising evaluation research and summarising the findings, with the purpose of answering a specified review question. In the context of evidence based practice, evidence reviews tend to be technical processes that require a good understanding of research methods and that are guided by standardised criteria and review protocols.42–44 (See also “Systematic Reviews” and “Critical Appraisal Criteria”.)

Evidence based recommendations

Formulating evidence based recommendations or guidelines draws on reviews of evidence, interpreting the findings to make a statement on the implications of the evidence for current policy and practice. This requires substantial input from practitioners, policy makers, and consumers, who can integrate the evidence with the necessary practical and social considerations.45,46

Evidence based guidelines specify the nature and strength of the evidence on which the recommendations are based. In many cases the recommendations are themselves graded, with the grade of recommendation determined by the strength of the evidence.9,47–49 Evidence based recommendations may also be graded with respect to the balance of benefits and harms.50

Consideration of the context in which the recommendations are to be implemented (and the implications of that implementation) inevitably raises questions of interpretation that do not emerge when summaries of evidence are considered in isolation. This can lead to disagreement about recommendations; poor compliance with guidelines even when they are evidence based; or conflicting guidelines on the same topic from different organisations.51–54

Evidence based policy and practice (public health action)

The advocacy and lobbying that are required to influence policies, change practice, and achieve public health action are an important component of public health.55 The process of achieving influence is often more difficult, and requires more complex social and political negotiations, than appraising evidence and formulating recommendations. In public health advocacy, research provides only one type of evidence, and evidence of any type is but one consideration that is taken into account.56 Social, political, and commercial factors often drive or determine the use of evidence in policy settings.57–59 A key feature of evidence based policy and practice is that it is informed by a consideration of the evidence, but the decisions made will depend on prevailing values and priorities.

Evidence based public health action is also often inhibited by a mismatch between the magnitude and importance of a public health problem, and the adequacy of evidence on potential interventions to address the problem. For example, despite the fact that health inequalities and childhood obesity are major, high priority public health problems, evidence is lacking to determine the most effective (or cost effective) policy and practice initiatives to address them.60,61

Linkage and exchange strategies

An ongoing challenge in public health is to close the gap between research and practice.62 Linkage and exchange strategies refer to initiatives that seek to promote research utilisation in decision contexts, and encourage research that generates purposeful and useful evidence.63

Disentanglement strategies

If evidence based proposals are given primacy over others, there are real incentives for those with interests in policy and practice directions to influence the creation and use of evidence.57,64 Clear demarcation between those who generate or review evidence and those with political or commercial interests is essential. Disentanglement strategies seek to establish structures and systems that protect independent research and reviews that are free from the influence of vested interests.65,66

SYSTEMATIC REVIEWS

A systematic review is a method of identifying, appraising, and synthesising research evidence. The aim is to evaluate and interpret all available research that is relevant to a particular review question. A systematic review differs from a traditional literature review in that the latter describes and appraises previous work, but does not specify methods by which the reviewed studies were identified, selected, or evaluated. In a systematic review, the scope (for example, the review question and any sub-questions and/or sub-group analyses) is defined in advance, and the methods to be used at each step are specified. The steps include: a comprehensive search to find all relevant studies; the use of criteria to include or exclude studies; and the application of established standards to appraise study quality. A systematic review also makes explicit the methods of extracting and synthesising study findings.31,42,43

A systematic review can be conducted on any type of research; for example, descriptive, analytical (experimental and observational), and qualitative studies.67 The methods of synthesis or summary that are used in a systematic review can be quantitative or narrative/qualitative (see “meta-analysis” and “narrative systematic review”). Systematic reviews are used to answer a wide range of questions, such as questions on: burden of illness, aetiology and risk, prediction and prognosis, diagnostic accuracy, intervention effectiveness and cost effectiveness, and social phenomena.31 Systematic reviews in public health are increasingly used to answer questions about health sector initiatives, as well as other social policies that affect health.68,69

The relevance and value of a systematic review is enhanced if potential users of the review are involved in relevant stages of the process. For example, users can help to ensure that the review question is relevant to policy and practice decisions; that the review considers all relevant measures and outcomes; and that the review findings and recommendations are presented in a format that is easy for the user to follow.70,71

The premise of systematic reviews is that another reviewer using the same methods to address the same review question will identify the same results. Although such repeatability has tended to be more achievable in quantitative reviews and meta-analyses, there are ongoing developments to improve and standardise methods of narrative synthesis.72

Meta-analysis is a specific method of statistical synthesis that is used in some systematic reviews, where the results from several studies are quantitatively combined and summarised.31 The pooled estimate of effect from a meta-analysis is more precise (that is, has narrower confidence intervals) than the findings of each of the individual contributing studies, because of the greater statistical power of the pooled sample.
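As an illustration of how such pooling works, the sketch below applies fixed-effect inverse-variance weighting to a set of invented study estimates (log risk ratios and standard errors); it is a toy example under those assumptions, not a description of any particular review’s method.

```python
import math

# Hypothetical study results, for illustration only:
# each tuple is (log risk ratio, standard error) from one intervention study.
studies = [(-0.22, 0.15), (-0.10, 0.20), (-0.30, 0.12)]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))   # smaller than any single study's standard error

lower = math.exp(pooled_log_rr - 1.96 * pooled_se)
upper = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR: {math.exp(pooled_log_rr):.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

The pooled standard error is smaller than that of any contributing study, which is why the pooled estimate has the narrower confidence interval described above.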

Narrative review is sometimes used to describe a non-systematic review.73 The term narrative systematic review is used for systematic reviews of heterogeneous studies, where it is more appropriate to describe the range of available evidence than to combine the findings into an overall result.74 A narrative systematic review can be conducted on both quantitative and qualitative research.

Cochrane reviews are systematic reviews carried out under the auspices of the Cochrane Collaboration. Review protocols are peer reviewed and published electronically before the reviews are conducted. Cochrane reviews are also peer reviewed for method and content before publication, and there is a commitment to update the reviews every two years.42

Publication bias is the bias that can result in a systematic review because studies with statistically significant results are more likely to be published than those that show no effect (particularly for intervention studies). Publication bias can be minimised if an attempt is made to include in a systematic review all relevant published and unpublished studies. This process can be facilitated by international registers of trials.

Heterogeneity is used generically to refer to any type of significant variability between studies contributing to a meta-analysis that renders the data inappropriate for pooling. This may include heterogeneity in diagnostic procedure, intervention strategy, outcome measures, population, study samples, or study methods. The term heterogeneity can also refer to differences in study findings. Statistical tests can be applied to compare study findings to determine whether differences between the findings are statistically significant.23 For example, significant heterogeneity between estimates of effect from intervention studies suggests that the studies are not estimating a single common effect. In the presence of significant heterogeneity, it is more appropriate to describe the variations in study findings than to attempt to combine the findings into one overall estimate of effect.31
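One common way of quantifying such variability, offered here as an illustrative convention rather than the only approach, is Cochran’s Q with the derived I² statistic; the sketch below uses invented study estimates.

```python
# Hypothetical study estimates, for illustration only: (log risk ratio, standard error).
studies = [(-0.40, 0.10), (0.05, 0.12), (-0.25, 0.15)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study estimate from the pooled estimate.
q = sum(w * (est - pooled)**2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1

# I^2: the approximate proportion of total variation attributable to between-study heterogeneity.
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} degrees of freedom, I^2 = {i_squared:.0%}")
```

With these invented figures Q exceeds its degrees of freedom by a wide margin (I² of roughly 76%), the kind of result that would argue for describing the studies separately rather than pooling them.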

CRITICAL APPRAISAL CRITERIA

Critical appraisal criteria are checklists or standards that are used to evaluate research evidence. They can be applied to assess the value of a single study, or to appraise several studies as part of the process of a systematic review. Critical appraisal criteria address different variables, depending on the nature and purpose of the research, and the expectations and priorities of the reviewers.

Methodological rigour refers to the robustness and credibility of the methods that are used in a study, and whether the study methods are appropriate to the study question.

An explicit and standardised approach to the critical appraisal of study methods is an important feature of evidence based public health. The aim is to determine whether the research findings are valid or credible as a piece of evidence. Critical appraisal checklists for assessing methodological rigour now exist for almost all types of research questions and study designs.13,31,75

Levels of evidence refer to a hierarchy of study designs that have been grouped according to their susceptibility to bias. The hierarchy indicates which studies should be given most weight in an evaluation where the same question has been examined using different types of study.9,31

Strength of evidence is often assessed on a combination of the study design (level of evidence), study quality (how well it was implemented), and statistical precision (p value and confidence intervals).10

Magnitude refers to the size of the estimate of effect, and the statistical significance and/or importance (clinical or social) of a quantitative finding. Magnitude and statistical significance are numerical calculations, but judgements about the importance of a measured effect are relative to the topic and the decision context.

Completeness considers whether the research evidence provides all the information that is required. For example, when evaluating evidence on public health interventions, reviewers need descriptive information on the intervention strategies that were adopted; the implementation of the intervention and how well it was done; the setting and circumstances in which it was implemented; whom the intervention reached (or did not reach); and how the intervention was received. Reviewers should also seek information on the unanticipated intervention effects, effect modification, and the potential harms of intervention.76

Relevance refers to whether the research is appropriate to the identified review question and whether the study findings are transferable (generalisable) to the population or setting that the question concerns.

Criteria of causation refer to a set of criteria used to assess the strength of a relation between a cause and an effect. The criteria were first proposed by Bradford Hill to assess whether the relation between an identified risk factor and a disease was one of causation, or merely association.77,78 The refined and widely adopted criteria are as follows5,79,80:

  • Temporality means that the exposure always precedes the effect.

  • Strength of the association is defined by the magnitude and statistical significance of the measured risk.

  • Dose-response relation means that an increasing level of exposure (amount and/or time of exposure) increases the risk of disease.

  • Reversibility/Experiment means a reduction in exposure is associated with lower rates of disease, and/or the condition can be changed or prevented by an appropriate experimental regimen.

  • Consistency means the results are replicated in studies in different settings or using different methods and thus the measured association is consistent.

  • Biological plausibility means that the relation makes sense according to the prevailing understanding of pathobiological processes.

  • Specificity is established when a single putative cause produces a specific effect.

  • Analogy/Coherence means that the cause and effect relation is already established for a similar exposure or disease and/or the relation coheres with existing theories.

ASSUMPTIONS

Assumptions are beliefs or tenets that are taken for granted. They are fundamental to effective communication. In the absence of assumptions, every interaction would need to begin with a detailed exposition of all that is believed or understood by all involved. Assumptions usually remain implicit, and often invisible, until they are questioned or challenged. However, the invisibility of assumptions can be problematic when, for example, collaborators think differently but use the same language or terminology.

Although the purpose of using research evidence is to introduce clarity and greater objectivity to deliberations about policy and practice, all evidence based claims are founded on assumptions. Assumptions shape the questions that are posed, influence the arguments that are made, and determine the evidence that is presented to support arguments. This explains why we may be “resistant to and not persuaded by evidence that relies on divergent or antagonistic assumptions; while the same evidence merely confirms what people wedded to those assumptions already know”.4

One way to uncover assumptions is to generate a range of hypothetical findings from a piece of research, and discuss the implications of these findings before real data are collected.40 This can help to reveal the assumptions or prejudices that both decision makers and researchers bring to their responses to, and interpretation of, particular potential results (see also “Utilisation Focused Evaluation”).

FRAMING

Problem framing refers to how different people often have different ways of thinking about a problem, and their various perspectives are enmeshed in the way they define, present, and examine that problem.81,82 This can affect how concepts like aetiology, causality, and evidence are discussed, described in writing, and researched. Thus, how a problem is framed determines the research questions that are asked, and the type of evidence that becomes available as a consequence. For example, researchers may privilege genetic explanations of health patterning over environmental explanations; or individual level analyses over group or contextual level analyses.

Frames are often tied to disciplinary perspectives, ideologies, or particular historical or political contexts (see also “Paradigm”). Like assumptions, frames are sometimes implicit rather than explicit. Thus researchers may unconsciously frame their study questions, and report findings in ways that do not make their framing of an issue visible or accountable.16,83

PARADIGM

A paradigm encapsulates the commitments, beliefs, assumptions, values, methods, outlooks, and philosophies of a particular “world view”. The term was popularised by Thomas Kuhn (1922–1996) whose text on The Structure of Scientific Revolutions examined the notion that throughout history, scientific inquiry has been driven by different paradigms; and thus what may be considered “normal science” at one period is subject to change when enough people adopt new ways of looking at the world.84

Some differences of opinion about evidence in public health can be attributed to differences of paradigm. For example, earlier in this glossary we distinguished between reviewing evidence (a technical process that requires a sound understanding of research methods); formulating evidence based recommendations (which requires technical and practical expertise); and achieving public health action (social and political negotiations). Sometimes those who generate or review evidence and those who interpret and use evidence have differing views on fundamental issues such as the nature of inquiry, what constitutes reliable knowledge, and how claims are substantiated. That is, they have different perspectives on the following:

  • Ontology—the study of reality or the real nature of what is (also called metaphysics);

  • Epistemology—the study of knowledge and justification; and

  • Methodology—the theory of how inquiry should proceed, or the principles and procedures of a particular field of inquiry.6

Commonly cited paradigms of inquiry include:

  • Positivism—this is now outmoded because it was based on a “naive realism” that assumed all reality was completely independent of the observer, and thus with the right scientific methods it could be measured or apprehended as “objective truth”.

  • Post-positivism—this is the paradigm of many scientific and social-science methods of inquiry (also known as “critical realism”). It incorporates a belief in some independent forms of reality, accepting that they can be only imperfectly (or probabilistically) apprehended, and that understanding of the reality is always subject to change. A majority of the premises and principles of evidence based public health fall within the post-positivist paradigm.

There are also many areas of public health research and action that reflect paradigms that are alternatives to post-positivism, for example, critical theory, constructivism, and participatory paradigms.85 These paradigms give greater emphasis to plural realities, and how these are shaped by social, political, cultural, economic, ethnic, and gender values. They also focus on locally constructed realities, and value subjective interpretations of those realities. Participatory research highlights the importance of inquiry based on collaborative action. Aspects of these paradigms are also reflected in some analyses and critiques of evidence based practice.16,86–88

REFERENCES


Footnotes

  • * The terms in this glossary have been presented in a logical reading order, rather than alphabetically.
