Reciprocal peer review for quality improvement: an ethnographic case study of the Improving Lung Cancer Outcomes Project
Emma-Louise Aveling1, Graham Martin1, Senai Jiménez García2, Lisa Martin3, Georgia Herbert1, Natalie Armstrong1, Mary Dixon-Woods1, Ian Woolhouse4,5

1Department of Health Sciences, Social Science Applied to Healthcare Improvement Research (SAPPHIRE) Group, University of Leicester, Leicester, UK
2Medtronic, Hospital Solutions, Hertfordshire, UK
3National Cancer Action Team, London, UK
4Clinical Standards Department, Royal College of Physicians, London, UK
5University Hospitals Birmingham NHS Foundation Trust, Queen Elizabeth Hospital Birmingham, Birmingham, UK

Correspondence to Dr Emma-Louise Aveling, Social Science Applied to Healthcare Improvement Research (SAPPHIRE) Group, University of Leicester, 22-28 Princess Road West, Leicester LE1 6TP, UK; eea5@le.ac.uk

Abstract

Background Peer review offers a promising way of promoting improvement in health systems, but the optimal model is not yet clear. We aimed to describe a specific peer review model, reciprocal peer-to-peer review (RP2PR), and to identify the features that appeared to support its optimal functioning.

Methods We conducted an ethnographic study involving observations, interviews and documentary analysis of the Improving Lung Cancer Outcomes Project, which involved 30 paired multidisciplinary lung cancer teams participating in facilitated reciprocal site visits. Analysis was based on the constant comparative method.

Results Fundamental features of the model include multidisciplinary participation; a focus on discussion and observation of teams in action rather than on paperwork; facilitated reflection and discussion of data and observations; and support to develop focused improvement plans. Five key features were identified as important in optimising this model: peers and pairing methods; minimising logistic burden; structure of visits; independent facilitation; and credibility of the process. Facilitated RP2PR was generally a positive experience for participants, but implementing improvement plans was challenging and required substantial support. RP2PR appears to be optimised when it is well organised; a safe environment for learning is created; credibility is maximised; and implementation and impact are supported.

Discussion RP2PR is seen as credible and legitimate by lung cancer teams and can act as a powerful stimulus to produce focused quality improvement plans and to support implementation. Our findings have identified how RP2PR functioned and may be optimised to provide a constructive, open space for identifying opportunities for improvement and solutions.

Introduction

Approaches to improving quality involving professional self-regulation have suffered something of a loss of favour in recent years, in part because of well publicised failures.1 The current trend is towards external regulation. However, it has proved difficult to design regulatory systems that encourage authentic improvement, rather than bureaucratic compliance with a narrow set of processes,2 and that avoid the risk of being seen as irrelevant or even harmful by the regulated community.3 Approaches that rely on mobilising professional knowledge and peer norming effects, including those using peer review, are now attracting renewed interest. Professional peer review can take a number of forms, ranging from local clinical audit and reviews conducted by specialist committees within single institutions4 through to large-scale quality-assurance programmes. One especially promising approach is that of reciprocal peer-to-peer review (RP2PR), with paired teams from different organisations undertaking reciprocal visits to provide constructive criticism and feedback on each other's clinical practice.5–7 However, as with many improvement interventions, precisely what the RP2PR intervention comprises and how it might be optimised remain unclear.

In drug development, substantial effort is invested in providing an explicit description of the proposed molecule; developing an understanding of the likely mechanisms of action through pharmacokinetic and phase I studies; and enhancing the design and administration of the drug before proceeding to large-scale trials. The first step in developing the evidence base for improvement interventions should similarly involve rich, detailed description of the intervention itself.8 Yet studies of quality and safety interventions rarely provide such accounts, and often omit key information about relevant processes.9 This means that many such interventions remain black boxes which cannot easily be reproduced if they prove successful, nor can the reasons they failed easily be identified if they do not succeed.10 In this paper, we aim to contribute to the evidence base for reciprocal peer review by describing a specific peer review model used in a lung cancer improvement programme and identifying the features that appeared to support optimal functioning of this model.

The programme, known as the Improving Lung Cancer Outcomes Project (ILCOP), ran from April 2010 to March 2012. It aimed to address variation in lung cancer outcomes across hospitals in England using facilitated RP2PR followed by supported quality improvement. This contrasts with the current national cancer peer review process, which assesses compliance with a number of structural and process measures via a single visit from an external team. To evaluate the outcomes of the programme, ILCOP used a controlled before–after design, the results of which will be reported separately. Here, we provide a detailed account of RP2PR in this context and describe how it could best be supported to work, identifying its active ingredients and characterising factors that interfere with or facilitate its optimal functioning.

Methods

All 156 English NHS hospital trusts were invited to participate in ILCOP. Ninety-two accepted. Eighty were judged by the core ILCOP team to have sufficient baseline data from the National Lung Cancer Audit to meet inclusion criteria. Of these, 30 NHS trusts were randomised into an intervention arm that participated in RP2PR, and 50 were allocated to the control arm for purposes of a quantitative evaluation (not reported here).

Ethnographic evaluation

We conducted an ethnographic study11 involving non-participant observation, interviews and documentary analysis. We conducted 6 days of non-participant observation of RP2PR visits involving three pairs (six visits in total). These three pairs were selected using a random number generator. We also observed 17 days of ILCOP programme activities, including training events. Using prompt guides based initially on literature review and discussions within the study team, and refined iteratively as the study proceeded, we conducted 14 semi-structured interviews with the ILCOP core team and 64 semi-structured interviews with members of 23 (of 30) clinical teams, including eight paired teams. Interviews covered views of ILCOP, peer review and other project activities; perceived impact of ILCOP; and suggestions for improving the RP2PR process. Interviews were audio-recorded and transcribed verbatim. Fieldnotes were debriefed within the research team, audio-recorded and transcribed. Relevant project documents were also collected for analysis, including plans, reports and training materials.

Data analysis and ethical approval

This study was approved by the Leicestershire, Northamptonshire and Rutland Research Ethics Committee. Data analysis was based on the constant comparative method.12 Initially, open codes were used to describe each unit of meaning. Through an iterative process of comparison across transcripts and fieldnotes, these codes were organised into thematic categories to provide a framework against which all data were processed by EA, GH and a third coder (see acknowledgements), using QSR NVIVO software. The framework was checked and modified throughout processing to ensure fit between data and codes.

Results

ILCOP assumed the characteristic form of a clinical community13: it comprised a small ‘core team’ and clinical teams from participating sites. The core team was based at the Royal College of Physicians, and included a clinical lead who was a senior lung cancer physician (IW); a project manager (SJ); a project administrator; and a quality improvement facilitator (LM). The core team had a key role in designing, managing and coordinating the peer review process. Having recruited the 30 participating sites, it paired teams using four headline indicators from the National Lung Cancer Audit. It then ran learning events, attended by the participating teams, that explained the aims and format of the reciprocal peer review visits, which would involve each team visiting its paired partner and hosting a visit in return.

Our observations and interviews suggested that six components were fundamental to the model used by ILCOP (box 1). First, the minimum requirements for team attendance were explicitly set so that genuine multidisciplinary representation could be achieved. Three members were required as a minimum: a clinical lead (a physician); a clinical nurse specialist; and the multidisciplinary team coordinator (MDTC). Second, the emphasis of the visits was intended to be on sharing learning across participating teams, helping teams to identify for themselves areas in which they would like to secure service improvement, and generating potential solutions through joint discussion. It was made clear during training events and elsewhere that the visits had no regulatory role, no role in any accreditation processes, and no role in investigation of possible adverse events or defects. Third, the visits were explicitly arranged so that they avoided simply reviewing paperwork. Host teams presented an overview of their service, and subsequently the visiting team was asked to observe directly the regular multidisciplinary team (MDT) meeting of the host team. Fourth, visits were structured to prioritise discussion, during which the host team could reflect on team processes, their performance on the National Lung Cancer Audit and on patient experience surveys, with the visitors providing feedback based on the data and their observations. Fifth, the meetings were facilitated by the independent facilitator from the core team. Finally, once they had hosted a visit, teams were asked to develop one or more improvement plans focused on an area that they wanted to improve, and to commit to implementing these plans over the subsequent 16 months with support from the ILCOP core team and peers.

Box 1

Core elements of the peer review agenda

  • Introduction by the host team on their multidisciplinary team (MDT) set up and the local context

  • Observation of the host team's MDT meeting: the visiting team used a structured form to note comments and suggestions about various aspects of the functioning of the MDT meeting (eg, attendance, access to technology)

  • Three discussion sessions (lasting approximately 30 min to an hour), each focusing on one of the following:

    • The functioning of the MDT meeting, using observations as a stimulus

    • The host team's National Lung Cancer Audit data, provided by the core team along with comparison/target data

    • The results of the host team's patient experience survey, which included quantitative data from closed rating scale questions as well as a list of free text comments

  • Summary and quality improvement plan: the final session aimed to identify the focus of improvement work to be undertaken by the host team. The facilitator also introduced a highly structured template for the quality improvement plan and provided a short introduction to using methods such as plan–do–study–act (PDSA) cycles

In optimising the functioning and operation of this model, we identified five factors that were especially important: peers and pairing methods; minimising logistic burden; structure of visits; the facilitator role; and credibility of the process. All involved substantial challenges.

Peers and pairing methods

The choice of peers and the methods by which they were paired were crucial. Care was taken by the ILCOP core team to match teams with different and complementary strengths and weaknesses, with the intention of avoiding unhelpful dynamics, such as a ‘good’ team feeling it had little to gain from being paired with a ‘bad’ team. This proved largely successful.

We didn't want trusts whose results on the National Lung Cancer Audit were perhaps not so good to feel it was a case of someone coming in with fantastic results, and them feeling almost victimised, demoralised, ‘It's them and us’. It's about, ‘Well we can do something well and you can do this well’, so they're both on a fairly even footing. (ILCOP Core Team Member, I-35)

Including a minimum of three core MDT members in the visits was also important because it ensured that participants felt the review was conducted by peers working in similar roles and settings, and it brought together different professional perspectives on the same issue, generating rich discussion.

So the huge benefit of ILCOP has been the fact that your peers who have the same problems that you have, have gone to your place and had a look at it so you're going to listen to what they point out and say, rather than some external guy who's never done thoracic surgery who's now coming down to talk down and tell you. (Physician, I-59)

Minimising logistic burden

Arranging reciprocal visits between busy practitioners with multiple commitments within a limited timeframe was a significant challenge. Substantial investment was needed by the core team in administrative tasks, such as room bookings and travel arrangements, and ensuring that data were easily available for presentation.

They were bringing us together and saying what days can you do it …, sort of, trying to plan your days when we got there and they were very supportive with trying to, sort of, bring data along, so we didn't have to do all the trawl ourselves and things like that. (Clinical Lead, I-63)

The challenges of organising visits led to considerable delays for some participants, sometimes resulting in a loss of focus, clarity of understanding and time for implementation.

Structure of visits

Also important to the optimal functioning of the model was the creation of safe environments for learning and sharing.

It's allowed us to focus on how we're doing things in what feels like quite a safe way—sort of unthreatening, maybe. It's about the time and space to focus on what we're doing and to try and find ways of improving. (Clinical Nurse Specialist, I-02)

Creating a ‘safe space’ required careful management. The prospect of RP2PR could seem daunting or potentially adversarial for participants. The reciprocity of visits—with visiting teams subsequently becoming host teams—had a helpful disciplining effect: that ‘reviewers’ knew they would later be ‘reviewed’ by the same team encouraged visitors to be respectful and constructive in their feedback. Reciprocal visits were important in allowing relationships of trust to develop over time between the teams, which in turn supported more open sharing and learning both formally and informally.

Actually we learnt more probably just from the gossip afterwards, on the second occasion. (Clinical Lead, I-01)

Observations suggested that having most of the day together—rather than a short meeting—was important in encouraging openness. Visitors had time to fully understand the host trust's situation, the challenges they faced and how they had got to where they were now. This helped avoid misunderstandings and feelings of being judged or accused, and generated more locally appropriate solutions while avoiding defensive responses of ‘that won't work here’.

It took all day to develop a sufficient understanding of the things that are going on, for people to figure out that this is actually one of the problems. It was right in the last session they began to think about the strategies for how to develop it … I felt that was a big contrast from the beginning to the end of the day. (Fieldnotes)

Allowing host teams to discuss and present their views on their national audit data, patient experience data, or the MDT meeting before their visitors commented also contributed to the creation of a ‘safe’ environment. Another key strategy was structuring discussion to include direct peer-to-peer (eg, nurse-to-nurse) discussion first, then discussion within teams, and then feedback to/from the paired team. Participants appreciated the opportunity to share challenges and working practices with their direct counterpart in the opposite team. It helped to strengthen the ‘voice’ of groups at risk of being marginalised by predominantly clinical discussions, and raised some individuals’ confidence in contributing to subsequent group discussions.

I've actually seen my own MDTC sort of, start to take on a bit more responsibility because I think it empowered her a bit to feel that she is a more important part of the team. (Clinical Lead, I-12)

However, some found it more difficult to identify relevant and useful practices to apply in their own role, and some non-clinical participants also felt less able to participate in the clinically focused discussion.

Once they start talking about technical stuff, I've glazed a bit and I was interested but it was irrelevant. (MDTC, I-08)

Facilitator role

Independent facilitation was important to ensuring inclusion of all voices, focus on the issues at hand, and good timekeeping, while avoiding protracted or bad-tempered discussions. The facilitator was also valuable in steering conversations towards doing the best with what teams had rather than complaining about deficits, and ensuring that discussions were concerned not only with identifying problems, but also focused on recognising teams’ strengths and drawing on the range of experiences and expertise to generate solutions.

I hear the Clinical Lead saying ‘there are a whole host of things I thought were pretty atrocious about the other team’ she was getting into her stride with the criticisms and recommendations … And this is the point at which the facilitator goes over and tries to encourage the team to be gentle and to frame things constructively. (Fieldnotes)

Most teams responded positively to working with an external facilitator and valued her work, but this was a challenging role. Being non-clinical meant that the facilitator was accepted by teams as objective and impartial, but also meant that it was more difficult to challenge participants’ views on clinical grounds, and could on occasion result in some resentment.

There was just too much trying to manage us … I found that a bit difficult and I had feedback from other members of the team that found that quite difficult as well. (Physician, I-19)

Some teams were more open and accepting of feedback than others; the more closed teams limited what could have been gained from the process. It is also possible that in their desire to be respectful, some teams held back honest criticism. This highlights the challenges in facilitating a productive process through which teams felt able to expose their service, and themselves, to scrutiny and challenge.

I didn't feel that they were being quite as objective about their processes and their MDT as we had on our day. I felt they were being a little bit protective, particularly one member was very much, ‘Oh we're fantastic and we're doing everything fine.’ … I gave up fairly quickly trying to discuss any of it. (Clinical Nurse Specialist, I-13)

Credibility of the review process

The fact that the project was led by a respected institution (the Royal College of Physicians) and built on a previous, well received RCP-led project with chest physicians helped establish initial credibility for ILCOP.

Because it's got backing and stamps with lots of, you know, the Royal College of Physicians’ cancer action team, lots of people had officially said this was OK and sponsored it and it looked like a reputable study. (Clinical Lead, I-47)

At NHS trust level, the requirement for approval to participate from chief executives helped legitimise time spent on participation. It was equally important, however, that what was presented for review was perceived as credible. National Lung Cancer Audit data and patient experience data were used in a targeted way to identify weaker areas of service provision and factors in MDT dynamics that seemed to contribute to variations in outcomes. However, due to time lags and incompleteness, some teams felt the data were not an accurate reflection of current service standards and were therefore of limited value.

The data was a bit out of date … Our data wasn't so good, but we knew why it wasn't so good and we knew it was a data collection issue, so, it wasn't as useful as it could have been. (Clinical Lead, I-16)

In some cases the number of patient experience questionnaires returned was low, prompting participants to challenge the validity of the results. Some participants remained unpersuaded of the usefulness of such data, or indeed of the existence of a ‘problem’ to be addressed.

He was very rude and said, ‘This is all bollocks, blah, blah, blah’… I think I said that ‘I'm not really certain that this is useful, going through everything laboriously, particularly when you've got N=4, I don't know what value that is’. (Physician, I-19)

It was not only data that were presented for review, however. The observation of a live MDT meeting was perceived by participating teams as especially valuable, credible, and resistant to ‘gaming’ in contrast to paper-based peer review. It was also an important source of ideas for the visiting team.

The national peer review … was just very much to do with collecting data and nothing useful had come out of it at all … the whole emphasis of ILCOP reciprocal peer review was completely different, on looking at what are we doing and where we want to improve. (Clinical Lead, I-12)

When you actually get across and see another team doing their, their normal MDT, I think most of the useful ideas come through that. (Physician, I-60)

Exchange visits and ‘live’ peer review offered a constructively disruptive perspective on team dynamics and service standards which had become normalised and accepted as immutable by teams, even though they were sometimes clearly suboptimal. Equally, the opportunity to discuss such issues with an external party helped teams to feel more able to tackle such sensitive issues.

Ensuring implementation and impact

RP2PR was generally a positive experience for participants, who often reported that it mobilised collective action in relation to quality that would not otherwise have happened. Taking part in an externally driven process was seen as an important ‘push’; it legitimised taking time out from busy schedules, and motivated teams to reflect on where improvements were needed, their successes, and where other teams were experiencing similar challenges.

You tend to feel pretty isolated and carry on doing MDTs like you've always done … you just plod in your own, sort of, furrow, so it was really to get an idea of, see how other people's MDTs worked and see if there was anything we could do to improve our service. (Clinical Lead, I-63)

You get to see not only what you do badly but also what you do well, and I think it's nice to have that positive feedback sometimes … and just acknowledging that we maybe share the same difficulties, even that can be quite a relief. (Clinical Lead, I-19)

Overall, teams saw RP2PR as an engaging, productive way to identify areas for improvement and generate solutions. For such energy to convert into benefits for patients, it was necessary for action to follow. Improvement plans detailing goals and methods for improvement were developed and submitted by 29 of the 30 teams in ILCOP. However, turning intentions into action was not straightforward. Some teams, by their own admission, had not fully understood what they had signed up to. It was also easy for the core ILCOP team—immersed in project planning for many months—to underestimate how much explanation participants new to the process would need. In some instances, disagreement or lack of communication about who would subsequently be taking the improvement plans forward stalled progress.

It seemed very doable and that sort of thing, obviously once we got involved we realised it was more than that, which we didn't really sign up to and that's when things got a bit difficult. (Physician, I-22)

Another challenge was securing cooperation from more peripheral MDT members (eg, where improvement plans required changes to systems in the pathology department), suggesting that RP2PR may work best with the involvement of as broad a range of professionals as possible. Directly involving managers in the RP2PR process (including observation of MDT meetings, which managers would not normally attend) also seemed to be especially useful, for example by persuading managers to provide support to improvement work, offer resources, and help teams to align requests with existing policies, targets and other managerial edicts.

Well I think the most important thing was probably getting our manager on board, and I think that really made a huge difference because once she saw a couple of things that really would make a big difference, were very simple to do, and were within her power—you see that's the thing, it's all about who's got the know-how and the power to do these things. (Clinical Lead, I-01)

The ILCOP core team reviewed improvement plans submitted by the teams and provided comments on why and how changes could be made. This was important, as many teams lacked quality improvement experience. It also helped minimise the risk that teams would choose to focus on ‘easy wins’ in order to lessen the burden of improvement work or avoid setting themselves up for failure. Encouraging teams to set themselves appropriately challenging goals and to make progress required careful negotiation, and underlined the importance of building supportive relationships between the core and participating teams.

You know, gently coercing us to come out with action plans and things that we should do and timetable and things, so but they weren't in your face but they, sort of, put just enough pressure on you to, sort of, make sure it got done. (Clinical Lead, I-63)

Ensuring follow-through and impacts that would benefit patients was undoubtedly a challenge, however. While there were clear examples of improvements (box 2), there was also some evidence of a lack of clarity around what constituted ‘success’, and that some teams’ definitions changed over time to reflect their actual achievements rather than the original goals of their improvement plans. This tension may have been exacerbated by difficulties the core team experienced getting teams to submit local measurements to drive and refine improvement efforts during the project.

Box 2

Examples of improvements made at different participating sites

  • Improved efficiency and participation in multidisciplinary team (MDT) meetings:

    • For example, by changing room layout, more effective chairing, improving access to key information at the meeting

  • Improvements in data completeness on the National Lung Cancer Audit:

    • For example, by introducing use of live data capture software during MDT meetings

  • Improving patients’ access to clinical nurse specialists:

    • For example, securing funding for an additional Clinical Nurse Specialist (CNS), reorganising clinics

  • Reduction in time from referral to diagnosis:

    • For example, by changing the timing of patients’ diagnostic tests (positron emission tomography scans, computed tomography scans, blood and lung function tests), working with radiology and/or pathology departments to understand and adjust processes for ordering, processing and reporting tests

  • Reduction in waiting time for active treatment (in one case from 12 to 3 days):

    • For example, by introducing an alert system to flag up the detection of more aggressive lung cancers and pre-book oncology clinic appointments

  • Improvements in histological confirmation and active treatment rates:

    • For example, by adopting less invasive methods of obtaining biopsy samples

Discussion

Our ethnographic study has allowed a detailed description of a model of RP2PR and identified the key constituents for such a model to function optimally (summarised in box 3). The RP2PR approach is distinctive in that, rather than relying on inspection of documentation, it involves face-to-face interaction and mutual observations of ‘live’ MDT practices in situ. RP2PR was seen as credible by lung cancer teams, in part because it involved assessment by peers rather than outsiders to professional groups.14 By pairing clinical teams, ILCOP reflected the multidisciplinary nature of modern healthcare and facilitated inter-disciplinary as well as intra-professional exchange. At its best, the process worked to expose current practice to the scrutiny of peers, create a constructive focus on scope for improvement, and generate locally appropriate solutions. Central to this was the creation of a safe space where all participants had a voice and challenges could be openly discussed. For many participants, it meant that the process was more useful than traditional audit or defect-focused peer review: it was less about ‘box ticking’, and more revealing of areas where improvement could be targeted. It thus averted problems of tunnel vision and priority distortion.15

Box 3

Lessons for optimising reciprocal peer-to-peer review (RP2PR)

  • Organising RP2PR—making it happen

    • A dedicated, core team to organise the process is essential

    • Legitimise participation, for example, gain chief executive officer approval

    • Minimise the logistical burden for participating teams and allow sufficient time to arrange visits

  • Creating a safe and productive learning environment

    • Recognise team achievements, not just weaknesses

    • Pair teams with differing strengths, not ‘good’ with ‘bad’

    • Maximise peer influence and peer-to-peer learning through the inclusion of team members from a range of disciplines

    • Reciprocity of visits within pairs is important for promoting constructive attitudes and trusting relationships

    • Plan the structure of visits carefully to support in-depth discussion and equal ‘voice’

    • Use an independent facilitator to maintain solution-oriented focus; consider the pros and cons of a clinical versus non-clinical facilitator

  • Ensuring credibility

    • Include observation of ‘live’ practice, such as the multidisciplinary team meeting

    • Ensure data are perceived as credible

  • Ensuring implementation and impact

    • Make sure participants understand what they are signing up to

    • Identify roles and responsibilities early on—who will do what, when?

    • Involve managers

    • Quality improvement plans should reflect local priorities so that teams take ownership, but careful use of ‘top-down’ influence may be needed to avoid under-ambitious ‘easy wins’

    • Getting teams to commit to local measurement is challenging but important

    • Ongoing support from the core team is essential, especially when participants lack quality improvement experience

In a context where the value of greater interdisciplinary communication is recognised but difficult to realise,16 our study has identified several features essential to the optimal functioning of an approach that appears very promising. These include the need for careful management of the dynamics of the RP2PR process, particularly to ensure that subordinate team members can participate fully and that the process remains constructive and action oriented. Otherwise, individuals can become frustrated or feel unheard, or the process can be undermined by unproductive confrontation, nihilism (‘none of these solutions will work here’) or fantasy (‘if only we had more money’). Independent facilitation has a crucial role in enabling this, but has to negotiate the cramped channels between neutrality and challenge, and between recognising achievement and rewarding complacency.

Other challenges for RP2PR include maintaining commitment over time. Delays in arranging visits can threaten early enthusiasm. A careful balance between providing enough information to make project demands clear and not overloading participants is needed to ensure participants understand what they are signing up to. While RP2PR may be valuable in generating ideas and solutions for improvement, the challenges in ensuring intentions become action are significant. Some features of the RP2PR process itself are helpful in this regard. Ensuring that teams set out sufficiently challenging and realistic goals that are likely to benefit patients, without damaging local ownership of, and commitment to, improvement plans, is a key task for the core team.17,13 Plans for local measurement and commitment to local, real-time measures (in addition to audit) need to be secured early on. In addition, engaging the broadest participation possible in the peer review process (including managers and more peripheral members) is important.

Empirical demonstration of the extent to which the challenges we have identified (such as ensuring follow-through and sufficiently ambitious improvement plans) can be overcome, and of the benefits of RP2PR for patients, awaits the outcomes of the separate quantitative evaluation. Longer-term study is needed to identify any unintended consequences of RP2PR for service quality. Although our study included representatives from 23 of 30 teams, those least engaged with the process are likely to be under-represented. RP2PR consumes considerable resource, and further evaluation is required to determine whether the costs are justified by the improvements made. However, our findings show what is needed to ensure the optimal functioning of the model if it is to be deployed, and how a balance between external impetus and locally owned solutions may be achieved. The overwhelmingly positive perceptions of participants, and RP2PR's potential to generate improvement work that aligns with professionals’ own sense of what will most benefit their patients, suggest this model might have a valuable role to play alongside other, more established methods.

Acknowledgments

We would like to thank Lisa Hallam for her help in preparing the document, and Peter Pronovost for his constructive comments on a previous draft. Janet Willars conducted 32 of the interviews with MDT members; Elizabeth Shaw coded these 32 interviews.

References

Footnotes

  • Funding This study was funded by The Health Foundation

  • Competing interests IW, SJ and LM were members of the core team funded by the Health Foundation who designed, implemented and oversaw the delivery of the Improving Lung Cancer Outcomes Project.

  • Ethics approval Leicestershire, Northamptonshire & Rutland Research Ethics Committee.

  • Provenance and peer review Not commissioned; externally peer reviewed.