Scientific Article | Clinical Investigation | Articles in Press, 101218

Improving Communication of Peer Review Conference Outcomes: A Practical Experience

Open Access | Published: March 15, 2023 | DOI: https://doi.org/10.1016/j.adro.2023.101218

      ABSTRACT

      Purpose

      To describe the design and implementation of a more robust workflow for communicating outcomes from a peer-review chart rounds conference. We also provide information regarding cycle times, plan revisions, and other key metrics observed since initial implementation.

      Materials and Methods

      A multidisciplinary team of stakeholders, including physicians, physicists, and dosimetrists, developed a revised peer review workflow that addressed key needs to improve upon the prior process. Consensus terminology was developed to reduce ambiguity regarding the priority of peer-review outcomes and to clarify expectations of the treating physician in response to peer-review outcomes. A custom workflow software tool was developed to facilitate both upstream and downstream processes from the chart rounds conference. The peer-review outcomes of the chart rounds conference and resulting plan changes for the first 18 months of implementation were summarized.

      Results

      In the first 18 months following implementation of the revised processes, 2,294 plans were reviewed and assigned feedback priority levels. Across all cases with feedback, the median time for the treating attending physician to acknowledge conference comments was 1 day and was within 7 calendar days for 89.1% of cases. Conference feedback was acknowledged within 1 day for 74 of 115 (64.3%) cases with level 2 comments and for 18 of 21 (85.7%) cases with level 3 comments (p=0.054). Contours were modified in 13 of 116 (11%) cases receiving level 2 feedback and 10 of 21 (48%) cases receiving level 3 feedback (p<0.001). The treatment plan was revised in 18 of 116 (16%) cases receiving level 2 feedback and 13 of 21 (61%) cases receiving level 3 feedback (p<0.001).

      Conclusions

      We successfully implemented a workflow to improve upstream and downstream processes for our chart rounds conference. Standardizing how peer-review outcomes were communicated and recording physician responses improved our ability to monitor conference activities.

      INTRODUCTION

      Plan quality in radiation oncology is strongly associated with patient outcomes, both in terms of tumor control and toxicity.1-8 Peer-review conferences remain the most common strategy to ensure quality and are an important quality management step for preventing failure modes in the clinic.9,10 Despite widespread agreement that peer review is important, evidence that minimally structured chart rounds improve the quality and safety of delivered treatment is lacking. Instead, recent reports have called into question the ability of conventional chart rounds conferences to detect even grossly deficient treatment plans.11
      In 2020 we convened a journal club to review articles concerning radiation oncology peer review, including a call to action to restructure approaches to peer review.12 We then reviewed our departmental quality management procedures through a series of conferences, small group meetings, and a formal failure modes and effects analysis. Our chart conference remained an effective process control, but we identified multiple potential failure modes downstream of the conference, where suggestions are communicated to the treating physician. In particular, the lack of a closed-loop communication system, confusion among physicians about how to prioritize conference feedback, insufficient downstream workflows, and the inability to formally monitor conference activities were identified as immediate opportunities for improvement.
      Throughout the process of revising our peer review workflow we extensively reviewed publications regarding treatment quality in radiation oncology. We observed that most publications about peer review in radiation oncology were from centers with long-established quality programs and tended to focus on peer review outcomes rather than on how process controls were successfully implemented. At the other end of the spectrum, resources such as ASTRO's Safety is No Accident publication and Accreditation Program for Excellence guidelines provide a general framework for the goals of a quality program, but many specifics are left to the discretion of the reader. Practical reports of implementing more robust peer review processes were relatively scarce and are needed as a resource for clinics looking to improve upon existing quality programs.
      The purpose of this report is to describe our department's initial effort to implement a more robust workflow for communicating outcomes from peer review. Our department consists of 22 faculty, four photon treatment centers, and one single-vault proton center, with approximately 1,500 patients treated each year. Most faculty treat multiple disease sites, necessitating a general conference where all plans are considered. We describe how processes were developed and implemented, with a focus on practical considerations. We further provide information regarding cycle times, plan revisions, and other key metrics observed over the past year. These data may be useful for clinics looking to implement more robust peer-review workflows and provide benchmarks of conference activities.

      METHODS AND MATERIALS

      Workflow development

      The workflow surrounding our peer review conference prior to December 2020 is presented in Figure 1; conference feedback to the attending physician was relayed through uncontrolled and unconfirmed methods of communication. A multidisciplinary team of stakeholders including physicians, physicists, and dosimetrists convened to develop a revised peer review workflow that addressed three key needs identified in the prior workflow: (1) developing consistent terminology to summarize the results of the peer review conference, (2) closing the communication loop by ensuring conference feedback is acknowledged by the treating physician, and (3) ensuring that plans receiving high-priority comments were routed back to the conference for additional review. A conceptual framework for a revised peer-review conference workflow (Figure 1) was developed to include standardized terminology regarding the severity of feedback, formal acknowledgement by the treating physician, and automated queueing for further review.
      Figure 1. Framework of peer-review workflow prior to 2020 (upper) and revised workflow (lower).
      Consensus terminology to assign a priority level to the comments recorded at the peer-review conference was developed, consisting of 4 priority levels that correspond to specific actions required of the treating physician:
      • Level 1: No comments and no further action required.
      • Level 2: Minor comments requiring acknowledgement by the treating physician. Treatment may continue as planned and revised plans do not require additional peer review.
      • Level 3: Significant comments requiring formal response by the treating physician and/or plan modification to be reviewed again by the peer review conference. Treatment may continue as planned.
      • Level 4: Major safety concerns. Treatment plan unapproved at the time of peer review and treating physician notified.
      We estimated that approximately 80% of cases would be classified as Level 1, 10-15% as Level 2, <5% as Level 3, and <1% as Level 4. Feedback and comments accompanying the priority level are recorded in the electronic workflow software, which notifies the treating physician of the peer review outcome by automated email. The treating physician was required to acknowledge all comments; a response was optional for Level 2 comments and required for Level 3 or Level 4 comments.
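      As an illustration of how this terminology maps to concrete workflow actions, the rules can be encoded as a small lookup table. The sketch below is hypothetical Python with illustrative names; in our implementation these rules live inside the ProcessMaker workflow rather than in standalone code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PriorityActions:
    """Actions required of the treating physician for a peer-review priority level."""
    acknowledgement_required: bool  # physician must acknowledge conference comments
    response_required: bool         # physician must document a brief written response
    rereview_required: bool         # a modified plan is routed back to the conference
    plan_unapproved: bool           # plan set to unapproved status at time of review

# Consensus terminology (Levels 1-4) encoded as data; see the list above.
PRIORITY_LEVELS = {
    1: PriorityActions(False, False, False, False),  # no comments, no further action
    2: PriorityActions(True, False, False, False),   # minor comments; revised plans not re-reviewed
    3: PriorityActions(True, True, True, False),     # significant comments; re-review if modified
    4: PriorityActions(True, True, True, True),      # major safety concern (re-review assumed here)
}
```

      Encoding the rules as data makes it straightforward for a workflow engine to decide which notifications and routing steps a given conference outcome should trigger.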

      Software development

      A custom workflow software tool was developed to facilitate the treatment planning and peer review workflows using the ProcessMaker (Durham, NC, USA) platform, which is available under both open-source and commercially supported models. Swim lane process maps were generated using Visio (Microsoft, Redmond, WA, USA) and translated to Business Process Model and Notation (Object Management Group, Milford, MA, USA) for integration into the interactive online module used to track the status of unit operations from patient simulation through plan peer review. Patient courses were assigned to the peer-review module automatically once the attending physician indicated that the treatment plan was approved. The planning dosimetrist then assigned a date for review based on the disease site and treatment start date. Supporting documents were uploaded to the module, including the electronic prescription and results from the dosimetric safety checking tool XCheck (RedIon, Birmingham, AL, USA). Representative screenshots of the workflow software user interface are presented in Figure 2.
      Figure 2. Screenshot of ProcessMaker electronic peer-review comments worksheet (top pane) and physician acknowledgement form (bottom pane).
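      The upstream automation described above can be summarized in a few lines of Python. This is a minimal sketch under stated assumptions: function and field names are hypothetical, and the production logic is a BPMN process executed by ProcessMaker, not this script.

```python
from datetime import date

def open_peer_review_case(course: dict, conference_dates: list[date]) -> dict:
    """Open a peer-review case once the attending physician approves the plan.

    The dosimetrist subsequently confirms or adjusts the review date based on
    disease site and treatment start date; here we simply default to the last
    weekly conference preceding the treatment start.
    """
    assert course["plan_status"] == "approved"  # trigger condition for queueing
    candidates = [d for d in conference_dates if d < course["treatment_start_date"]]
    return {
        "course_id": course["course_id"],
        "proposed_review_date": max(candidates) if candidates else None,
        # Supporting documents attached to the module for review.
        "documents": ["electronic prescription", "XCheck dosimetric safety report"],
    }
```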

      Peer-review conference

      Treatment plans were reviewed at one of six weekly conferences (2 general conferences, 2 proton conferences, and 2 stereotactic conferences) attended by attending and resident physicians, physicists, and dosimetrists. Minimum standards for review were: (1) review of the electronic prescription, (2) visual assessment of the dose distribution at multiple isodose levels, and (3) review of the dosimetric parameters defined by institutional treatment planning guidelines. Detailed review of contours was routine for proton therapy plans and optional for photon plans. Contour review was not routinely performed prior to treatment planning. A priority level was assigned to each plan by an attending physician and recorded in ProcessMaker, and conference comments were summarized to accompany any priority level of 2 or higher. If the treating physician was present at the conference and verbally acknowledged minor feedback, then a peer review level 1 outcome was recorded in ProcessMaker.

      Review of conference outcome

      All cases assigned a peer-review priority level of 2 or higher triggered an automatic email notification to the treating physician. Physicians also received daily reminder notifications of any outstanding peer review outcomes. The attending physician was required to acknowledge the comments and indicate whether any change to the contours or treatment plan was made. For priority level 3 or 4 feedback the physician was also required to document a brief response to the conference comments; a response was optional for priority level 2 feedback.
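      A sketch of the daily reminder pass is shown below, again with hypothetical names; the production version is an automated ProcessMaker task rather than a standalone script.

```python
def send_daily_reminders(open_cases: list[dict], email_client) -> None:
    """Remind physicians of peer-review feedback awaiting acknowledgement or response."""
    for case in open_cases:
        level = case["priority_level"]
        if level >= 2 and not case["acknowledged"]:
            # Levels 3-4 additionally require a brief documented response.
            action = ("acknowledge the conference comments and document a brief response"
                      if level >= 3 else "acknowledge the conference comments")
            email_client.send(
                to=case["attending_email"],
                subject=f"Outstanding peer-review feedback (priority level {level})",
                body=f"Please {action} for course {case['course_id']}.",
            )
```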

      Data collection and analysis

      The ProcessMaker peer review module was queried for individual course data. For courses with level 2-4 peer review feedback, the date that the attending physician acknowledged or responded to conference feedback was abstracted, as well as whether the associated plans underwent changes to contours or were replanned. Descriptive statistics were used to summarize conference outcomes, and between-group differences in frequencies were assessed using the χ2 test.
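      For reference, the between-group comparisons reported in the Results can be reproduced from the published counts with a few lines of Python. We assume the uncorrected χ2 statistic (correction=False), which matches the reported p=0.054 for the acknowledgement comparison; denominators follow the text (115 level 2 cases for the acknowledgement analysis, 116 for plan changes).

```python
from scipy.stats import chi2_contingency

# Rows: level 2 vs level 3 feedback; columns: event occurred vs did not occur.
acknowledged_within_1_day = [[74, 115 - 74], [18, 21 - 18]]
contours_modified = [[13, 116 - 13], [10, 21 - 10]]
plan_revised = [[18, 116 - 18], [13, 21 - 13]]

for label, table in [("acknowledged within 1 day", acknowledged_within_1_day),
                     ("contours modified", contours_modified),
                     ("plan revised", plan_revised)]:
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2({dof}) = {chi2:.2f}, p = {p:.3g}")
```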

      RESULTS

      Development to incorporate the treatment planning workflow into ProcessMaker was initiated on January 28, 2020, and integration of the peer review process began on September 3, 2020. The integrated peer review software tool went live on December 18, 2020. Between December 18, 2020 and May 31, 2022, 2,294 plans were reviewed and assigned feedback priority levels; 1,952 (85.1%) plans were associated with photon-based treatments and the remaining 342 (14.9%) plans with proton-based treatments. A total of 2,154 (93.9%) plans received level 1 scores, 116 (5.1%) plans received level 2 feedback, 21 (0.9%) plans received level 3 feedback, and 3 (0.1%) of the 2,294 plans reviewed received level 4 scores during peer review and were set to unapproved status in the treatment planning software during the conference. A summary of feedback priority levels by disease site and treatment modality is shown in Table 1.
      Table 1. Peer review priority scores stratified by disease site.

      Modality  Disease site  No. of cases  Level 1, N (%)  Level 2, N (%)  Level 3, N (%)  Level 4, N (%)
      Photon    Breast        337           319 (94.7)      17 (5.0)        1 (0.3)         0
                CNS           301           283 (94.0)      15 (5.0)        2 (0.7)         1 (0.3)
                GI            112           101 (90.2)      10 (8.9)        0               1 (0.9)
                GU            161           149 (92.5)      8 (5.0)         4 (2.5)         0
                GYN           57            55 (96.5)       2 (3.5)         0               0
                H&N           113           101 (89.4)      11 (9.7)        1 (0.9)         0
                Met           444           421 (94.8)      17 (3.8)        5 (1.1)         1 (0.2)
                Peds          36            36 (100)        0               0               0
                Thorax        199           188 (94.5)      10 (5.0)        1 (0.5)         0
                Other         192           179 (93.2)      10 (5.2)        3 (1.6)         0
                Total         1,952         1,832 (93.9)    100 (5.1)       17 (0.9)        3 (0.2)
      Proton    Breast        47            45 (95.7)       1 (2.1)         1 (2.1)         0
                CNS           39            35 (89.7)       4 (10.3)        0               0
                GI            6             6 (100)         0               0               0
                GU            44            43 (97.7)       1 (2.3)         0               0
                GYN           1             1 (100)         0               0               0
                H&N           133           127 (95.5)      5 (3.8)         1 (0.8)         0
                Met           8             8 (100)         0               0               0
                Peds          26            23 (88.5)       1 (3.8)         2 (7.7)         0
                Thorax        12            11 (91.7)       1 (8.3)         0               0
                Other         26            23 (88.5)       3 (11.5)        0               0
                Total         342           322 (94.2)      16 (4.7)        4 (1.2)         0
      CNS: central nervous system; GI: gastrointestinal; GU: genitourinary; GYN: gynecologic; H&N: head and neck; Met: metastatic; Peds: pediatrics.
      Across all priority levels, the median time for the treating attending physician to acknowledge conference comments was 1 day and was within 7 calendar days for 89.1% of cases. Conference feedback was acknowledged within 1 day for 74 of 115 (64.3%) cases with level 2 comments and for 18 of 21 (85.7%) cases with level 3 comments (p=0.054). A scatter plot of the number of days for attending physicians to acknowledge peer review comments over time since implementation of the revised workflow is presented in Figure 3.
      Figure 3. Scatter plot of days to acknowledgement of peer review comments plotted over time since program implementation.
      Contours were modified in 13 of 116 (11%) cases receiving level 2 feedback and 10 of 21 (48%) cases receiving level 3 feedback (p<0.001). The treatment plan was revised in 18 of 116 (16%) cases receiving level 2 feedback and 13 of 21 (61%) cases receiving level 3 feedback (p<0.001).

      DISCUSSION

      The purpose of this report is to provide a practical description of the process of implementing a standardized support workflow for a conventional chart rounds conference and to describe the initial results of cycle times and plan changes. Peer review in radiation oncology is critically important, both as a process control and as a means to monitor the quality of radiation therapy treatment plans. The topic of peer review is an active area of discussion among radiation oncologists, particularly regarding strategies to increase effectiveness and consistency as part of a high reliability organization.13 For more than 10 years the peer review approach at our institution followed the weekly chart conference model, typically with about one-third of attending physicians present to review between 20 and 40 external beam plans in a 1-hour span. In 2020 we began a formal review of our quality management process in response to the American Association of Physicists in Medicine Task Group 100 report as well as a growing number of reports criticizing unstructured chart conferences.9,11,14 During the conceptualization and implementation phases of our revised processes we identified key challenges that are likely to impact other groups looking to make similar changes.
      When this peer review initiative commenced, there was no commercially available software designed to facilitate peer review. Others have also reported a need to develop custom electronic whiteboards to track peer review outcomes.14 Capturing peer review outcomes and physician responses electronically allowed us to track cycle times. We observed fewer instances of response times longer than 1 week after the initial 6 months following implementation. We were also able to confirm that physicians tended to respond more quickly to higher level feedback, with a large majority of level 3 comments addressed within 1 day. We were also able to monitor the frequency with which physicians modified contours or treatment plans in response to conference feedback; as expected, higher priority comments were more likely to trigger changes.
      Another benefit of electronic data capture of peer review outcomes is that it enables a program to benchmark its peer review practices. Our peer review conference yielded level 2 comments for 5.1% of plans and level 3 comments for 0.9% of plans, which was lower than our initial estimate that 10-15% of plans would prompt level 2 comments and as many as 5% would prompt level 3 comments. Prior to this initiative, we did not have any internal data with which to benchmark our existing workflow because the outcomes had not been formally recorded. The optimal frequency of plan modifications triggered at peer review depends on many factors. Given the size of our academic practice spanning multiple facilities, there are few similar series reported to draw from for comparison. In similar models of peer review, about 8-30% of plans are returned for modification.14-17 The rate of plan revision greatly depends on department-specific methods to evaluate plan quality and provide feedback. Major contributors to this variation include peer review of contours prior to treatment planning and virtual vs in-person attendance. Both in-person attendance at conferences and peer review of contours prior to treatment planning are associated with higher engagement and higher rates of plan revisions.18 The ideal rate of plan revisions to ensure optimal care is unknown, but monitoring the impact of peer review is important for internal review of conference activities.
      Effective communication of peer review recommendations is critical to ensure that both content and priority are accurately conveyed to the treating physician. We previously observed a wide range of behaviors among physicians after receiving peer review recommendations; some physicians would nearly always modify treatment plans, even for minor comments about stylistic differences, whereas others would rarely change a plan. Standardizing our terminology regarding the priority of feedback reduced ambiguity and ensured that high priority comments prompted further conference review. The priority level system we implemented was a minor modification of the “No Fly” grading system used by Cox and colleagues.14 Priority level 2 was developed to convey minor suggestions or stylistic differences that should be acknowledged but need not prompt a plan revision. We also added a fourth category to indicate plans that were set to unapproved status (i.e., not deliverable) by the conference, since our current peer review occurs after plan approval. Prior to the implementation of this scored model, we and others had observed that comments made by senior physicians, or those with assertive personalities, were often perceived as higher priority than comments made by others, regardless of content and intent. Using common terminology requires the vocal or senior peer reviewer to clarify the priority of their comments so they are less likely to be over- or underinterpreted.
      Under our prior workflow, whether the treating physician received and acted on feedback from their peers was not recorded, which was concerning given that physicians were often not present at the conference. The need to create an electronic data capture tool therefore became apparent early in this process, and our focus shifted from modifying the peer review conference itself to developing better processes to monitor conference activities and to effectively communicate conference recommendations to the treatment team. In other words, we believed that improving the rigor of peer review would not translate to improved treatment quality unless we first developed more robust supporting processes.
      This early report of our changing peer review workflows is subject to a range of limitations that are important to acknowledge. Perhaps the most important consideration is that this was developed as a quality improvement project rather than a formal research study designed to address a specific hypothesis. We applied this workflow at a single institution using custom software that is not commercially available, which is an important consideration for generalizability. We did not systematically record the dosimetric impact of planning changes that were prompted by conference feedback and recognize this as an important consideration moving forward. Another future direction includes formally assessing variable conference participation and other factors that impact the frequency and type of feedback.
      In summary, we successfully implemented a program to transform our peer review process from an unquantifiable, open-ended communication system nested within a hierarchical academic practice model to a closed-loop, quantifiable system that operates smoothly and provides feedback promptly. We continue to explore avenues to improve our process, including contour review prior to treatment planning, and our ability to assess physician, patient, and plan metrics provides a solid foundation for continued growth and a commitment to safety. We encourage other practices to adopt a quantitative model of plan peer review that is reasonable to implement, respects the demands placed on each department, and incorporates metrics that can be used to modify physician behavior and to provide an opportunity for continual self-improvement of the model.

      Author responsible for statistical analyses

      Andrew McDonald, ammcdonald@uabmc.edu

      Funding

      None

      Data availability

      Research data are stored in an institutional repository and will be shared upon request to the corresponding author.

      REFERENCES

      1. Cui T, Ward MC, Joshi NP, et al. Correlation between plan quality improvements and reduced acute dysphagia and xerostomia in the definitive treatment of oropharyngeal squamous cell carcinoma. Head & Neck 2019;41:1096-103.
      2. Giraud P, Racadot S, Vernerey D, et al. Investigation of relation of radiation therapy quality with toxicity and survival in LAP07 phase 3 trial for locally advanced pancreatic carcinoma. International Journal of Radiation Oncology, Biology, Physics 2021;110:993-1002.
      3. Boustani J, Rivin Del Campo E, Blanc J, et al. Quality assurance of dose-escalated radiation therapy in a randomized trial for locally advanced oesophageal cancer. International Journal of Radiation Oncology, Biology, Physics 2019;105:329-37.
      4. Kearvell R, Haworth A, Ebert MA, et al. Quality improvements in prostate radiotherapy: outcomes and impact of comprehensive quality assurance during the TROG 03.04 'RADAR' trial. Journal of Medical Imaging and Radiation Oncology 2013;57:247-57.
      5. Tol JP, Dahele M, Gregoire V, Overgaard J, Slotman BJ, Verbakel W. Analysis of EORTC-1219-DAHANCA-29 trial plans demonstrates the potential of knowledge-based planning to provide patient-specific treatment plan quality assurance. Radiotherapy and Oncology 2019;130:75-81.
      6. Marcello M, Ebert MA, Haworth A, et al. Association between measures of treatment quality and disease progression in prostate cancer radiotherapy: an exploratory analysis from the TROG 03.04 RADAR trial. Journal of Medical Imaging and Radiation Oncology 2018;62:248-55.
      7. Fairchild A, Straube W, Laurie F, Followill D. Does quality of radiation therapy predict outcomes of multicenter cooperative group trials? A literature review. International Journal of Radiation Oncology, Biology, Physics 2013;87:246-60.
      8. Brade AM, Wenz F, Koppe F, et al. Radiation therapy quality assurance (RTQA) of concurrent chemoradiation therapy for locally advanced non-small cell lung cancer in the PROCLAIM phase 3 trial. International Journal of Radiation Oncology, Biology, Physics 2018;101:927-34.
      9. Huq MS, Fraass BA, Dunscombe PB, et al. The report of Task Group 100 of the AAPM: application of risk analysis methods to radiation therapy quality management. Medical Physics 2016;43:4209-62.
      10. Hoopes DJ, Johnstone PA, Chapin PS, et al. Practice patterns for peer review in radiation oncology. Practical Radiation Oncology 2015;5:32-8.
      11. Talcott WJ, Lincoln H, Kelly JR, et al. A blinded, prospective study of error detection during physician chart rounds in radiation oncology. Practical Radiation Oncology 2020;10:312-20.
      12. Chera BS, Potters L, Marks LB. Restructuring our approach to peer review: a critical need to improve the quality and safety of radiation therapy. Practical Radiation Oncology 2020;10:321-3.
      13. Brunskill K, Nguyen TK, Boldt RG, et al. Does peer review of radiation plans affect clinical care? A systematic review of the literature. International Journal of Radiation Oncology, Biology, Physics 2017;97:27-34.
      14. Cox BW, Teckie S, Kapur A, Chou H, Potters L. Prospective peer review in radiation therapy treatment planning: long-term results from a longitudinal study. Practical Radiation Oncology 2020;10:e199-e206.
      15. Walburn T, Wang K, Sud S, et al. A prospective analysis of radiation oncologist compliance with early peer review recommendations. International Journal of Radiation Oncology, Biology, Physics 2019;104:494-500.
      16. Martin-Garcia E, Celada-Álvarez F, Pérez-Calatayud MJ, et al. 100% peer review in radiation oncology: is it feasible? Clinical & Translational Oncology 2020;22:2341-9.
      17. Hesse J, Chen L, Yu Y, et al. Peer review of head and neck cancer planning target volumes in radiation oncology. Advances in Radiation Oncology 2022;7.
      18. Hughes RT, Tye KE, Ververs JD, et al. Virtual radiation oncology peer review is associated with decreased engagement and limited case discussion: analysis of a prospective database before and during the COVID-19 pandemic. International Journal of Radiation Oncology, Biology, Physics 2022;113:727-31.

      Declaration of Competing Interest

      Drs. Boggs, Fiveash, Cardan, and McDonald receive research funding from Varian Medical Systems unrelated to the current work. Drs. Boggs, Fiveash, and Cardan receive honoraria from Varian Medical Systems unrelated to the current work.