Making tools that work for us: Improving clinical decision support
Hospitalists encounter clinical decision support (CDS) tools hourly in their daily clinical practice. This support can come in the form of clinical algorithms, abnormal lab result alerts, order sets, medication dosing alerts, and more.1 CDS tools are implemented with the goal of increasing evidence-based practice and improving patient outcomes, but evidence from clinical trials on the impact of these tools is mixed, with the greatest improvement seen in process measures rather than outcomes.1, 2 CDS tools are often implemented as part of operational or quality improvement work without rigorous testing or postimplementation evaluation. Yet, like any other tool utilized in clinical practice, CDS warrants evaluation of both its effectiveness on clinical outcomes and any unintended consequences for patients or users.
In this issue of the Journal of Hospital Medicine, Williams et al.3 report their findings from a pragmatic clinical trial comparing CDS to usual care at two different pediatric emergency departments (EDs) using two different electronic health record (EHR) systems. The goal of the CDS was to increase guideline-concordant antibiotic prescribing for community-acquired pneumonia. The author team employed rigorous methods in developing and evaluating CDS tools that can serve as a model for future hospital medicine CDS studies. These methods included understanding clinical workflows, employing an iterative, user-centered design process that takes site-specific sociotechnical differences into account, establishing a priori outcomes, and conducting both intention-to-treat and per-protocol analyses. Nevertheless, the results demonstrated only modest effectiveness of the CDS in improving guideline-concordant antibiotic prescribing, limited to patients discharged from the ED (74.9% in the CDS arm vs. 66% with usual care), mirroring the results of many other CDS trials that demonstrate only small to modest improvements.1, 2 This raises two questions: "What do we do with tools that are seemingly helpful but not dramatically beneficial in practice?" and "How do we implement CDS to improve patient outcomes without overburdening providers?"
The authors encountered challenges with CDS implementation and uptake, a theme common to CDS trials. First, their CDS tool was implemented in two EHR systems at two different hospitals, which, while enhancing generalizability, necessitated two different tool designs and integration into different clinical workflows. This approach makes it difficult to separate the impact of the CDS itself from the impact of the conditions that influenced its implementation. Second, in their per-protocol analysis, only 49% of providers viewed the recommendations provided by the CDS tool. While this may appear low, it is consistent with other CDS studies and highlights barriers to provider adoption of CDS, including alert fatigue, inappropriate alerting, disrupted workflows, and perceived threats to autonomy.2, 4 In a survey of 45 institutions, Dziorny et al.5 found that pediatric intensive care practitioners viewed CDS tools as intrusive and worried that they diminished critical thinking. Williams et al.3 acknowledge these barriers as a potential explanation for the modest impact on their outcome of interest.
While many CDS tools are easy to add to EHRs with the assistance of information technology specialists, building a vision for change and implementing a tool that has a significant impact on patient outcomes is complex and requires purposeful planning and expertise. To build and implement effective CDS, hospitalists should adhere to CDS best practices4 and engage colleagues with expertise in user-centered design, clinical informatics, change management, and implementation science. While a focus on clinical outcomes is fundamental, evaluating user-oriented balancing measures (e.g., adoption, ease of use, workflow disruptions, alert fatigue) is vital to increasing the success of our CDS tools and improving uptake of evidence-based practice overall. Although CDS tools are abundant in our daily work, many barriers to achieving their intended effect remain. In addition to applying the rigorous methods utilized by Williams et al.,3 we must understand the systems into which CDS tools are implemented and invest in data-driven surveillance to assess their performance. Only then will we realize the potential for CDS to enhance patient care and support the clinical decisions we make as hospitalists every day.