

Good research conduct
J Grigg

Correspondence to: Dr J Grigg, Senior Lecturer in Paediatric Respiratory Medicine, Division of Child Health, Department of Infection, Immunity and Inflammation, University of Leicester, Leicester LE2 7LX, UK


A review of clinical research conduct

Good intentions alone are not enough to protect researchers from performing bad clinical research. This review examines areas that are most likely to cause problems, in particular duplicate publication, conflict of interest, authorship, and data storage. It also discusses the way journal editors approach research conduct issues, and how to create a research environment conducive to good research conduct.

The conduct of clinical research is increasingly governed by rules. Some are statutory, and others derive from guidelines drawn up by universities, funding bodies, and editors of medical journals. In the past, the conduct of research was a matter of passive trust between authors and journals, and between “hands on” researchers and their supervisors. In contrast, researchers and research managers must now actively work at maintaining good conduct in research. The UK Medical Research Council has identified the key research virtues as selflessness, integrity, objectivity, accountability, honesty, and leadership:1 qualities that, one hopes, all paediatricians would aspire to. But what relevance has this to day-to-day clinical research, beyond an aspiration to be good? Specifically, where do researchers unwittingly fall foul of acceptable standards of research conduct? This review is aimed at the well intentioned clinical researcher who strives to keep within the rules, and focuses on areas that cause the most problems.

Duplicate publication
A duplicate publication is one that overlaps substantially with an article published elsewhere, and is usually published by the same author.2 Recently, authors of systematic reviews in anaesthesia found 103 potential duplicate publications (secondary publications) linked to 78 corresponding primary articles (that is, those published first), and identified specific patterns of duplication.3 First was the straight copy: for example, the results of a study republished in a pharmaceutical sponsored supplement. Second, the data from a single multicentre trial were disaggregated into smaller subunits. The third pattern was "data extension", where a preliminary article was extended by the addition of more data. The final pattern was chaotic: an article must have come from the same study, yet both the study subjects and the outcomes appeared different. Some of these practices are acceptable, but only if accompanied by transparency on the part of the authors. Indeed, copies of papers may legitimately be published in a foreign language, if the English language journal allows it. From the viewpoint of journal editors, problems arise when the intentions of authors are opaque. For example, the duplication study found that 65 secondary publications (including 11 of the translated copies) made no reference to the primary publication. There is no doubt that hidden (covert) duplication is bad publication practice. In some circumstances it may even be construed as research misconduct, since duplicated data may distort meta-analyses (different authors and a different author order are no protection against duplication3) and may artificially inflate the apparent validity of the data.

Unwitting duplicate publication may occur when different sets of observations have been obtained from the same group of patients.2 To prevent this, editors must be informed of any links between studies in the covering letter, copies of linked manuscripts (submitted, in press, or published) should be enclosed on submission, and any linked papers should be included in the references. Publishing linked data separately is a high risk enterprise: there is an increased chance not only of inadvertent covert duplication, but also of "salami" publication, in which data are sliced into "smallest publishable units". Separate publication may be justified if the researcher considers that distinct issues have been addressed; however, it is important to ask oneself whether a single publication would present a more unified picture. An opinion from a colleague can provide a useful external viewpoint when this issue arises. Overlap of tables and of the style and content of paragraphs may easily occur when writing multiple reviews on the same subject,4,5 and discussing previous reviews with the commissioning editor is essential. However, overlap of data per se does not preclude publication. There are consequences of getting this process wrong: notices of duplicate publication, "naming and shaming" editorials,6 and replies that read as excuses (box 1). Thus, to avoid duplicate or overlapping publications, authors should err on the side of caution when disclosing possible overlaps, take care to cross-reference any overlapping publication (even at proof stage), and remain aware of this as an issue.

Box 1: Responses given to an editor on discovery of duplicate publication2

  • “We did not read the instructions”

  • “We wanted to reach a different audience”

  • “We perceive the overlap to be much less than the reviewer or editor thinks”

  • “Yes, we now see that we broke the rules, but this was not our intent”

Objectivity and conflict of interest
Striving for scientific objectivity in the conduct and writing up of clinical research helps to protect readers of papers from developing a misleading impression of the significance of the data. Data stand on their own, but absolute objectivity when interpreting one's own study is probably never achieved. Well designed studies, especially those whose results are important irrespective of outcome, help researchers to be self-critical about data, and thus more able to recognise and discuss study limitations. At the other end of the spectrum are researchers who are certain that their own data interpretation is objective,7 a state that can lead to the evangelical promotion of false ideas. More subtle losses of objectivity may occur when writing up studies (for example, failure to highlight study limitations, or quoting only data that support the study outcome), and it is worth remembering that "nothing is so difficult as not deceiving oneself" (Ludwig Wittgenstein).

Some influences on objectivity, such as the desire for knowledge, commitment to understanding disease in children, ambition, and academic reputation, are impossible to measure, but some negative influences are measurable and significant. For example, there is unequivocal evidence that financial ties affect the research process. Stelfox and colleagues8 found a strong association between authors' published positions on the safety of calcium channel antagonists and their financial relationships with pharmaceutical manufacturers. More recently, Whittington and colleagues9 reported that, whereas published studies suggested a favourable risk-benefit profile for selective serotonin reuptake inhibitors, when these were combined with unpublished data the risks could outweigh the benefits for the treatment of depression in children. One interpretation of these data is that financial or contractual considerations inhibited clinicians from demanding publication. But clinical researchers have a duty to publish both positive and negative trial data, and should be aware that: (1) research funded by drug companies is less likely to be published than research funded by other sources;10 and (2) drug industry sponsored studies are more likely to have outcomes favouring the sponsor than differently funded studies, despite meeting acceptable quality criteria.10 Journal editors also have an important role in protecting against this bias towards publication of positive trials. To protect against bias, pharmaceutical companies must allow clinicians to examine the raw data, and must not require consent before submission of a manuscript for publication:11 safeguards that should be in place before embarking on a contractual relationship.
In the final manuscript, this relationship must be declared as a conflict of interest.11 What can be confusing is that a "financial conflict of interest" exists for the researcher not only when judgement has been overtly affected, but also when judgement might be, or might be perceived to be, affected.12 Thus what researchers "feel" about their own objectivity is irrelevant when disclosing financial links, no matter how unimportant those links appear to them.

A useful rule for conflict of interest issues is to declare all financial ties to companies that are broadly related to your area of research, including: (1) payments that would be regarded as personal income by the tax authorities; (2) subsidised trips; (3) research grants or "awards" from industry, or from sources with an interest in the study, made to your research account; (4) pharmaceutical industry shares; and (5) any other financially relevant facts, such as holding a patent in the area. Not surprisingly, researchers give stereotypical explanations when an undeclared conflict of interest is discovered (box 2). A recent case related to the paper published by Wakefield and colleagues,13 in which a payment from the Legal Aid Board for research in the same area was not declared at the time of submission. An editorial in The Lancet subsequently stated that "we regret that aspects of funding for parallel and related work ... and the existence of ongoing litigation were not disclosed to the editors. We judge that all this information would have been material to our decision-making about the paper's suitability, credibility, and validity for publication".14 In this case the issue was not that the data were necessarily flawed, but that important information needed to place the authors' interpretation (especially the speculation on the role of immunisation) in the appropriate context was missing: an interpretation that was subsequently retracted by most of the original authors.15

Box 2: Responses given to an editor on discovery of conflict of interest16

  • “I didn’t think that the policy applied to the type of (financial) relationship that I had”

  • “The amount I received wasn’t significant enough to merit declaring it”

  • “This is an invasion of my privacy”

  • “Your accusation is unjustified—show me the evidence that I was biased”

Authorship
Collaborative activity is vital for clinical research. Yet the recent trend to quantify research in terms of the number of high impact papers published per unit time means not only that authorship may bring prestige, but also that rewards are directed to institutions.16 Given these pressures, it is not surprising that rules have been developed governing who can and cannot be included as an author. In 1985, the International Committee of Medical Journal Editors recommended that researchers should not be authors on work that they cannot defend publicly.17 More specifically, authorship requires substantial contributions to conception and design, or to analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; and final approval of the version to be published.18 Thus authorship is not earned by the clinician who merely gives permission for patients to be entered into a study, or by the head of department included for political reasons.

The number of authorship disputes has increased significantly over the past decade, at least in the USA.19 To prevent misunderstandings, the principal researcher should establish who is to be the first and last author of the final publication before starting the study. The inclusion and order of other individuals can be delayed until the broad outline of the manuscript is established. In general, the first author is the hands-on clinician (for example, the research fellow), and the last author is the senior supervisor (who has written the grant application). The hands-on researcher should resist the temptation to offer "gift authorship" in order to curry favour. A recent development that reduces some of the pressure on the first author from "fellow travellers" is the published contributor statement detailing each individual's role.20 This does not stop individuals making false statements, but publication of contributor statements increases the chance of being found out.

Journal editors are often the first to hear about problems with authorship, duplicate publication, and conflicts of interest. In 1997, a self-help group of medical editors was formed (Committee on Publication Ethics, COPE), which by 2003 had expanded its membership to include editors from over 160 journals.21 At COPE meetings, editors present anonymised vignettes of problems, and the Committee advises on a plan of action. Authors are given an opportunity to reply to questions. COPE then provides advice on any further action to the relevant editor (for example, retraction, notice of duplicate publication, referral to the relevant university, hospital, or licensing body).

Recording and storing data
From May 2004, clinical trials in the UK must comply with the EU principles of "Good Clinical Practice" (GCP). This means that pharmaceutical industry standards of quality assurance, standard operating procedures, audit, and quality control systems are applied to all trials that assess "medicinal" products.22 The regulations cover trial design, ethics committee approval, establishment of adequate resources, review of documents by ethics committees, protocol compliance, consent, progress reports, safety reporting, and audit. A significant effect of this legislation falls on trials funded by charities or by local institutions, where in the past the principal investigator decided how to record, handle, and store data. Now, even pilot studies involving medicines must be recorded so that there is an "audit trail" from the summary data back to the data from individual study participants. The GCP legislation will also make it increasingly difficult to justify less rigorous methods of data handling in observational studies involving children. An additional requirement, demanded by research councils, universities, and some journals, is that research data should be retained for at least five years (whether current clinical researchers could reconstruct their five year old publications from original data is debatable). It is therefore the responsibility of all authors of a publication to ensure that data are stored and formatted in such a way that an independent individual could easily reconstruct the published summary data and statistical analysis. Ideally, both paper and electronic files should be stored, and the electronic files should be re-saved annually onto the most recent storage medium, using the latest version of the spreadsheet software.

Why spend the time and effort archiving data? First, authors of systematic reviews often ask, several years after publication, for the original data in order to perform a meta-analysis. Second, archiving enables a full reanalysis to be performed if questions arise about the study's statistical analysis or interpretation. For example, in 1993, Dockery and colleagues23 published a study on the effects of fossil fuel particles on health. The results were criticised by industry lobbyists, and since the findings had major implications for air quality guidelines, a reanalysis was commissioned in 2000 by the Health Effects Institute (Boston, USA). Its results confirmed the findings of the original study and, in doing so, strengthened its validity.24 In summary, data from studies involving children should be collected to the EU standard, irrespective of whether this is a legal requirement. Responsibility for ensuring that the data are archived lies with all of the authors, and continues for at least five years after publication.

Research fraud
In 2002, Jan Hendrik Schön was found to have faked at least 17 published papers after astute readers noticed that a figure on molecular layer switching in Nature also appeared in a publication in Science relating to a different device.25 At his peak, Schön was publishing on average a paper every eight days, but after the fraud's discovery all of his publications were deemed untrustworthy.25 The definition of research fraud is confined to premeditated dishonesty: that is, fabrication (invention of data), falsification (wilful distortion of data), and plagiarism (copying large chunks of data and words without attribution).26 There is thus a clear distinction between fraud and "suboptimal" research behaviour (see above).25 Financial gain is not always the motivation behind fraud, since there are less overt rewards such as career advancement, the ability to attract more research money, and an improved profile in research assessment exercises. Schön's hubris was his undoing, but fraud occurs in more subtle forms: for example, "pushing" data by cutting out inconvenient data points, or adding factitious points in a direction that the researcher guesses is probably correct.27 Although unproven, there is a strong suspicion that researchers who commit major frauds have been getting away with lesser frauds for years.28 Clinical researchers should not therefore feel threatened if collaborators request individual data. On the contrary, a lead author should expect co-authors to check the original data underlying summary graphs and tables that are to be published in their name. Schön deleted his original data files, making it impossible to check his scientific claims, but one of his co-authors acknowledged that he "should have done more to confirm the accuracy of the papers".27 What should you do if you suspect a collaborator of research fraud? Whistleblowing is always difficult,29 and takes civil courage, but it remains the last checkpoint of good research conduct.
Organisations such as UK universities and NHS trusts have pathways for expressing concerns, with major disciplinary consequences if the whistleblower is victimised. However, before embarking on an official complaint, it is probably best to discuss matters with a trusted, independent senior colleague.30

The research environment
Researchers do not act in isolation, and the ethos of good research practice must be ingrained into all research structures, from the top down.28 To date, there are few hard data on how institutional characteristics influence research integrity,28 and it may well be that personal interactions are the most important determinant. Indeed, psychologists have recognised that socialisation with peers, review of research projects and outputs by peers, and an active and explicit adversarial system of criticism within local research structures and at meetings are essential "debiasing" techniques.7 Mentorship, traditionally provided by senior academic paediatricians, plays a key role in ironing out sloppy practice, and can provide a lifelong role model of good conduct for young researchers, as long as the supervision is of high quality. Important components of mentorship are: (1) being available for formal scheduled meetings and informal discussions; (2) acting as an advocate; (3) insisting on completion of projects and checking data; (4) assisting with networking; and (5) seeking extramural funding.31 Whether the recent trend to merge some departments of paediatrics into more amorphous "scientific" entities (in order to improve research assessment exercise scores) will reproduce the critical mass of peers needed to nurture and inspire the best paediatric research practice remains to be seen.

In the UK, partnerships between the NHS and universities have a key role in ensuring good research conduct among established researchers and in teaching doctors in training. The size and power of these partnerships mean that they can be both responsive to local needs and innovative. For example, in August 2004, the Leicestershire, Northamptonshire and Rutland Deanery funded two pilot posts in academic medicine for the second year of the Foundation Programme (F2) for newly qualified doctors. Doctors in these posts spend half of their out of call time in an academic medicine programme, which includes supervised training in literature review and critical appraisal, good research practice, research study design, data interpretation, and presentation (Professor B Williams, Dr A Stanley, personal communication). Another option would be to second SpRs on a rotating basis to the "hands on" running of independent clinical trials, with "added on" structured training. Engagement of institutions with senior researchers should be an active process: a list of rules and aspirations published on the local intranet is not enough. When evaluating how seriously employers regard this issue, researchers should ask whether there is a central computer archive in which to lodge data, whether there is a critical mass of paediatric clinical researchers to provide mutual support and constructive criticism, and whether the institution provides compulsory training in research conduct.

A variety of UK bodies provide guidance on good research practice (table 1). However, there is no central agency dedicated to reviewing cases of misconduct and to ensuring that research employers actively promote this concept. To date, the UK General Medical Council, with its multiple roles, reacts only to the most serious cases of research misconduct. In a recent case, a researcher was suspended when he failed to: maintain complete and accurate records and retain them for audit; record research results accurately; keep the records secure; and consult all the other authors when submitting the work for publication (accessed December 2004). In contrast, Nordic countries have national committees dedicated to handling scientific dishonesty, which have proved highly efficient and well regarded, and which the UK could use as a model in the future.32

Table 1
Websites with information on research best practice (accessed December 2004)

Conclusions
To the paediatrician setting out on a research career it may seem that bureaucracy and rules exist to get in the way of, rather than to enable, clinical research. However, these rules, in all their complexity, exist because clinical researchers have a unique responsibility to both science and patients. In commenting on scientific misconduct, Richard Horton (Editor, The Lancet) wrote: "the chain of trust that links the patient to doctor, and doctor to researcher, is fragile. Research evidence strengthens this chain whereas fraud weakens it",32 a statement which emphasises that research, when done well, should be a positive experience for all participants. Paediatric clinical research is certainly difficult to perform, and may indeed be currently undervalued. But its potential to impact directly on the care of children, in itself, provides sufficient reward if it is performed to the highest possible standard.



  • Competing interests: The author has received financial support to attend conferences from Astra, 3M, Merck (UK), Glaxo-Wellcome, and Allen and Hanburys (UK). He has received payment for lectures given at educational meetings from Astra, Merck (UK), and Glaxo-Wellcome. He has been a co-investigator on a asthma genetics study funded by Glaxo-Wellcome, and has received an unrestricted research grant from Merck (UK).

