
A Step Toward Identifying Sources of Medical Errors: Modeling Standards of Care Deviations for Different Disease States

Cali M. Fidopiastis; Kim E. Venta; Erin G. Baker; Kay M. Stanney

Military Medicine, Vol. 183 (2018-03-01), Issue suppl_1, pp. 105-110. DOI: 10.1093/milmed/usx203


Objective To examine the feasibility of utilizing electronic health records (EHR) to determine a metric for identifying physician diagnostic and treatment deviations in standards of care for different disease states. Methods A Boolean-rule-based model compared deviations in standards of care across four disease states: diabetes, cardiovascular disease, asthma, and rheumatoid arthritis. This metric was used to identify the relationship between physician deviations in standards of care procedures, before and after diagnosis, for 76 internal medicine physicians. Results The Boolean-rule-based model identified patterns of standards of care deviation for the physicians before diagnosis and during treatment. The deviations identified for each of the four disease states were then related to Continuing Medical Education courses that could support further training. The rule-based model was extended and improved by including system and process aspects of medical care that are not specifically related to the physician, yet potentially have an impact on the physician's decision to deviate from the standards of care. Conclusion The Boolean-rule-based approach provided a means to systematically mine EHRs and use these data to assess deviations in standards of care that could identify quality of care issues stemming from system processes or the need for specific CME for a physician.

Keywords: medical errors; decision support technologies; medical data analytics; patient safety

INTRODUCTION

Correct diagnoses are essential to identifying appropriate patient care plans and translating those plans into successful treatment.[1] Missed, delayed, or incorrect diagnoses can severely impact patient safety and increase medical costs due to the added complexity of care; yet, the Institute of Medicine reports that approximately 12 million people per year are negatively affected by diagnostic error.[2],[3] Although medical errors are an inevitable part of the medical diagnosis and treatment process, they remain understudied and poorly defined.[4] Worse yet, measures of medical errors are typically based on adverse patient outcomes (i.e. they are outcome-dependent, centered on injury and death), which provides limited ability to improve patient safety.[5] Operational definitions of diagnostic error need to include diagnoses that do not necessarily harm the patient yet lead to suboptimal care, including metrics that identify faulty processes as well as organizational, procedural, or professional patient safety issues.[6] At the process level, identifying when physicians fail to follow accepted practice (e.g. the standards of care [SoC]) may be one means of capturing faulty processes that can lead to medical errors, irrespective of outcome.[7] Health information technology may provide the data necessary to identify such process-level failures. These data could additionally be used to prescribe personalized education plans for health providers that may reduce the number, and attenuate the effects, of diagnostic errors.[2]

Electronic health records (EHRs) are patient-centered digital records maintained by health care providers that document care over time. Algorithms that evaluate EHRs, through a translation layer that captures deviations from SoC, may uncover diagnostic process errors, thereby speeding up the detection and remediation process.[2] Computer-based decision support systems using EHRs are already successful in addressing part of the patient safety problem, as they have been used to reduce medication errors and expedite the availability of patient information.[8] Extending decision support systems such that they utilize EHR-derived metrics to mitigate the effects of declining diagnostic decision-making processes and skills and associated diagnostic errors could further enhance patient safety.[9] Specifically, developing a methodology to evaluate the feasibility of utilizing EHRs to determine Continuing Medical Education (CME) that improves a physician's diagnostic skill and identifies sources of diagnostic error has been discussed in the literature; however, such an approach has yet to be fully realized.

The main goal of this research is to assess the potential use of EHR-derived metrics to evaluate a physician's diagnostic skill level and provide decision support in CME selection. Additionally, the effort examines the potential to evaluate system- or process-based diagnostic errors following Donabedian's Structure–Process–Outcome (SPO) model,[4] which can support comparison of error trends across medical networks that follow the SPO model. The SPO model accounts for workflow and organizational aspects of health care systems by identifying structure (health care network practices), processes (direct care delivery), and outcomes (patient-centric metrics) that are not necessarily under the physician's control, yet potentially contribute to the diagnostic error process. The algorithm modeling effort and how it integrates within the SPO model are also discussed.

Overview of the Skill-DETECT Model

Skill Deficiencies Evaluation Toolkit for Eliminating Competency-loss Trends (Skill-DETECT) is a pedagogically designed training solution for outpatient clinical specialties that provides decision support for assessing physician competency and identifies CMEs that foster maintenance of diagnostic decision-making.[10],[11] Skill-DETECT accomplishes this through a stepwise application of EHR-derived algorithms that provide four key capabilities: (1) identify when/why deviations from standards of care have already occurred; (2) predict probable onset of diagnostic error and determine, with high predictive ability, when skill or knowledge areas will be likely to become deficient; (3) identify methods and tools that enable physicians to pre-emptively refresh knowledge and maintain familiarity and fluency across expected competencies within internal medicine; and (4) provide real-world, practical training at optimal intervals to maintain cognitive clinical skill proficiency. Figure 1 displays the overall Skill-DETECT model.

Graph: Figure 1. Skill-DETECT systems that utilize electronic health records to derive diagnostic error metrics.

Skill-DETECT provides a means to utilize patient encounter data and apply a translation layer that extracts adherence to, or deviations from, SoC through a series of automated algorithms; these in turn determine physician competency and relate those performance results to relevant CMEs that can remediate deficiencies. With regard to the provision of CME for experienced physicians, a Human Standardized Patient (HSP) approach to medical training, where patient–physician encounters simulate the types of disease complexity and patient characteristics seen in the medical setting, is often considered superior to role-playing and other non-interactive forms of presenting patient scenarios.[10],[11] Skill-DETECT has thus been linked to the virtual standardized patient (VSP) medical trainer, which uses virtual reality technology to present an immersive, interactive, context-relevant patient encounter to the physician.[12] In this regard, Skill-DETECT can provide the algorithmic decision support to validly and reliably determine the type of VSP scenario(s) appropriate to addressing identified cognitive skill deficiencies.

The results reported herein illustrate the capability of the SoC Boolean-rule-based model, the first phase of the three-phase model, to identify deviations in standards of care as they relate to different disease states. Specifically, a Boolean-rule-based model was developed that compares published SoC (from national and agency standards, health care research/quality standards, and national frequency statistics) with the quality of care (QoC) provided by a single physician when evaluating patients presenting with one of four disease states: diabetes, cardiovascular disease, asthma, and rheumatoid arthritis. Quality of care was defined as a physician's adherence to SoC for both the diagnosis and treatment phases of the care delivery process. Each deviation is then directly linked to deficiencies within the underlying competencies, pinpointing specific skill gaps. Skill-DETECT also assessed risk management from the perspective of SPO-based structure and process issues within a health network.[13],[14] The methodology for selecting and validating the SoC chosen as the comparison, or gold standard, for this effort is outlined by Venta et al,[15] as is the validation of the mathematical results of the Skill-DETECT model. The results presented in this article support the use of the Boolean-rule-based model to provide input to the logistic regression phase of the model, which predicts individual physician deviations from SoC and aligns them with categories of CMEs (Fig. 1).
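The core idea of the Boolean-rule-based model, comparing an encounter against a library of SoC rules and recording a binary followed/not-followed outcome per rule, can be sketched as follows. This is a minimal illustration only: the record fields, rule names, and rules themselves are hypothetical simplifications, not the published rule library.

```python
# Sketch of Boolean SoC rule evaluation over a simplified encounter record.
# All field and rule names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Encounter:
    phase: str                           # "diagnosis" or "treatment"
    tests_ordered: set = field(default_factory=set)

# Each rule returns True when the SoC was followed; a False result is a
# deviation ("no" in the binary response distribution).
DIABETES_RULES = {
    "hba1c_ordered": lambda e: "HbA1c" in e.tests_ordered,
    "fasting_glucose_ordered": lambda e: "fasting glucose" in e.tests_ordered,
}

def deviations(encounter, rules):
    """Return the names of SoC rules that fired, i.e. were not followed."""
    return [name for name, rule in rules.items() if not rule(encounter)]

visit = Encounter(phase="diagnosis", tests_ordered={"HbA1c"})
print(deviations(visit, DIABETES_RULES))  # → ['fasting_glucose_ordered']
```

Counting fired rules per physician, separately for the diagnosis and treatment phases, yields the deviation tallies that feed the later statistical phases of the model.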

METHODS

Inclusion/Exclusion Criteria

Explorys medical professional members participate in the Explorys Benchmark Network of health care systems throughout the United States (U.S.) by providing EHRs for their respective medical facilities.[16] The population of de-identified Explorys EHR data represents 360 hospitals, 317,000 providers, and 50 million patients. Although the highest concentration of patient data comes from the Cleveland, Ohio area, EHRs for this effort were representative across the entire U.S.

The sample data for this study consisted of a random draw from the Explorys health care database of de-identified EHRs for patients between the ages of 18 and 65 yr who presented with one or more of four diseases: diabetes, rheumatoid arthritis, cardiovascular disease, or asthma. Inclusion criteria consisted of patients living in the U.S. with 10 or more office visits over at least a 7-yr period to a treating internal medicine practitioner within the Explorys network.

Sample Design and Location Selection

A cluster sampling strategy was employed to control for potential oversampling in the Ohio region and for heterogeneity in disease prevalence based on geographical location. A measure of risk burden across a state, and cities within that state, as assessed by the U.S. Centers for Disease Control and Prevention provided the classification for the sample clusters.[17] The risk burden metric accounts for both incidence and prevalence rates of different diseases per region. Table I shows the diabetes risk burden for locations identified by a three-digit zip code within the Explorys EHR data. Cities with a risk burden of over 10 were evaluated as high risk (i.e. Charleston, WV, and Dallas, TX). A score of approximately 9 represented a median risk burden, whereas scores below 9 represented a lower risk burden. To balance regional differences, EHR data were chosen from locations representing all three levels of risk burden. A location was included if there were over 5,000 physicians in that region in the Explorys database. Next, a random sample of internal medicine practitioners from each of the six locations represented in Table I provided EHRs for the modeling effort. This sampling strategy reduced the computational effort of controlling for regional differences associated with disease rates. Thus, the patient cohort was matched to 76 individual internal medicine physicians whose patient encounter data were then used for the exploratory analysis phase of building the EHR Boolean-rule-based model.

Table I. U.S. Locations and Their Associated Burden Rates and Physician Counts Included in the Study.

Location                 Diabetes Risk   Three-Digit Zip Code    Number of Physicians Included
Charleston, WV           12.0            250–253                 13
Dallas/Fort Worth, TX    10.7            750–753, 760–762         5
Florida                   9.4            320–349                 20
Washington, DC            9.1            200                      9
Western Michigan          9.0            490, 491, 493–495       13
Portland, OR              8.0            970–972                 16

Key Measure Validation

Before the analysis, SoC were determined from published standards and reviewed by a panel of internal medicine physicians with competence in the diagnosis and treatment of the four disease states. The physicians rated the rules as accurate with 100% concordance.[15] A rule library of these standards was then created to act as the comparison rules for the model. The Boolean-rule-based model for each disease state was then mathematically validated by evaluating the relationships within the EHR data of 12 million medical records and identifying those SoC rules that complied with 100% certainty with a binary response distribution (e.g. "yes" the SoC was followed or "no" it was not). The accuracy of the encoding of the SoC rules was then tested through a series of Structured Query Language (SQL) queries, thereby eliminating false-positive and false-negative firings. Finally, a random sample of individual physicians and their respective patient encounters flagged by the Boolean algorithm was investigated by visual inspection and rectified to resolve any remaining algorithm discrepancies.
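A SQL-based check of a rule encoding might look like the following sketch, which runs a hypothetical deviation query against an in-memory SQLite table. The schema, column names, and rule are illustrative assumptions, not the study's actual queries.

```python
# Illustrative SQL check of a Boolean SoC rule encoding; the schema and
# the rule ("HbA1c not ordered at diagnosis") are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE encounters (patient_id INT, phase TEXT, hba1c_ordered INT)")
conn.executemany(
    "INSERT INTO encounters VALUES (?, ?, ?)",
    [(1, "diagnosis", 1), (2, "diagnosis", 0), (3, "treatment", 1)])

# The rule fires (a deviation) when HbA1c was not ordered at diagnosis;
# manually verifying flagged rows against the intended Boolean logic is
# how false-positive and false-negative firings are caught.
rows = conn.execute(
    "SELECT patient_id FROM encounters "
    "WHERE phase = 'diagnosis' AND hba1c_ordered = 0").fetchall()
print(rows)  # → [(2,)]
```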

Statistical Analysis

Error patterns (i.e. deviations in SoC) before and after diagnosis for 76 physicians were evaluated via a repeated-measures t-test to assess the effectiveness of the Boolean-rule-based model in determining diagnostic skill deficiencies and diagnostic errors in general. In addition, a paired-samples t-test evaluated significant differences for each physician between the diagnosis and treatment phases of care. As suggested in the literature, there is a need for a better methodology for studying diagnostic errors.[4] Error patterns for each risk burden location were evaluated descriptively to better understand structure and process trends under the SPO model. This retrospective descriptive evaluation captured diagnosis and treatment errors on a regional level with more than one health care system represented.
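The paired-samples t statistic used here is just the mean of the per-physician differences divided by the standard error of those differences. A stdlib-only sketch, with made-up deviation counts standing in for the study data:

```python
# Paired-samples t statistic on per-physician deviation counts.
# The before/after counts below are illustrative, not study data.
import math
from statistics import mean, stdev

def paired_t(before, after):
    """t = mean(d) / (sd(d) / sqrt(n)) over paired differences d."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

before = [5, 7, 6, 8, 5, 9]   # deviations flagged pre-diagnosis
after  = [2, 3, 1, 4, 2, 3]   # deviations flagged during treatment
t = paired_t(before, after)   # compare against the t distribution, df = n - 1
```

In practice one would look the resulting t up against the t distribution with n − 1 degrees of freedom (here df = 75 for the 76 physicians) to obtain the reported p-values.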

RESULTS

Demographics

The final patient dataset included EHRs for 5,706 patients. The cohort was 59% female and 41% male. Cohort patients presented with cardiovascular disease (39%), diabetes (44%; type I and type II), asthma (47%), and rheumatoid arthritis (5%); percentages exceed 100 because patients could present with more than one disease. Patients were predominately Caucasian (76%) or African-American (22%); race was unknown for 10% of the cohort. Figure 2 shows the gender and race differences per disease state. There was wide variation in patient age; however, the peak year of birth was approximately 1950 (age 66 yr), which reflects the national demographics for the disease states represented in the study.

Graph: Figure 2. Demographic representation of gender and race by disease state.

Diagnosis and Treatment Error Patterns

Figure 3 shows the error patterns based on the median number of rules that flagged as deviations from the representative SoC. Error patterns for the physicians before and after diagnosis identified potential diagnostic error trends for each of the represented disease states. Based on paired-sample t-tests, physicians made significantly more errors before diagnosing diabetes than after (84% of errors before versus 16% after; t[99] = 5.81, p = 0.001). This pattern was consistent for arthritis (100% of errors before versus 0% after; t[99] = 8.38, p < 0.001). The error pattern was reversed for patients presenting with asthma, with more errors being made after diagnosis (16% of errors before versus 84% after; t[99] = −7.11, p < 0.001). In contrast, there were no significant differences in physician errors before and after diagnosing patients with cardiovascular disease (51% of errors before versus 49% after; t[99] = 0.13, p = 0.90).

Graph: Figure 3. Boxplot of number of rules that were marked as deviations (i.e. rules fired). (A) Diabetes by location. (B) Arthritis by location. (C) Cardiovascular disease (CVD) by location. (D) Asthma by location. Center bar represents distribution median, box hinges correspond to first and third quartiles, and whiskers represent the highest and lowest values present in the data within 1.5 times the interquartile range.

These disease-state differences held regardless of risk burden. For example, physicians in West Virginia and Texas, both high-risk burden locations, had relatively lower deviations in SoC across the four disease states. However, physicians in Oregon, a low-risk burden location, made more errors diagnosing diabetes than treating it. Physicians in Florida, a median-risk burden location, had the highest error rate and variability across all disease states.

DISCUSSION

This study evaluated the capability of Skill-DETECT, an EHR-data-based model, to detect deviations from SoC as a first step in creating a model to evaluate physician-related errors. Figure 2 shows that the sample cohort demographics represented those found for the dominant age group in the U.S. across disease states.[18] Thus, the sample population is representative of the U.S. and provides a means to evaluate QoC error patterns for individual physicians, which is the next stage of the model. These SoC deviation patterns suggest that diabetes (84% of errors) and arthritis (100% of errors) are problematic across physicians during the diagnostic phase of care. The results suggest that, for diagnostic error mitigation to be meaningful and effective, EHR-derived metrics should capture the complexity of both diagnosing and treating a specific disease, as well as account for the heterogeneity among patients.

Treatment errors are considered separable from, and more preventable than, diagnostic errors; thus, they present less of a threat as a patient safety concern.[19] However, as the results show, 84% of errors in delivering care for asthma occurred during the treatment phase. Asthma causes roughly 2 million emergency room visits per year, with close to 4,000 deaths due to improper treatment.[20] For this less prevalent disease, understanding error patterns and providing training on treatment options may increase patient safety and improve patient outcomes. By separating diagnostic from treatment errors, the model may also be able to select CMEs more appropriately. How CMEs are best delivered, whether via HSP, VSP, or traditional methods, may depend on the type of errors committed and on the disease state; further research is suggested.

The diagnostic error patterns observed potentially provide a means to understand where in the health care delivery process deviations from SoC are most related to medical practice. Advocates for the use of a holistic SPO model when evaluating physician error argue that patients with common conditions are just as likely to experience harmful errors.[4] As Figure 3 shows, physicians in high-risk burden locations for common diseases such as diabetes do deviate less from SoC. In fact, physicians in Texas and West Virginia committed the fewest deviations with the lowest variance, whereas physicians in Oregon, a low-risk burden location, committed the most. One explanation for this error pattern may be the higher level of preventative care practices available in West Virginia and Texas.[17] The Centers for Disease Control and Prevention reports that 88% of diabetes patients in West Virginia and 75% in Texas see a health professional regularly, whereas no data on professional treatment are reported for Oregon. Further, risk factors for developing diabetes with complications are higher in Oregon (8.0) and West Virginia (7.9), yet medical facilities in West Virginia provide more education and other patient outreach than those in Oregon. These preventative practices are part of the structure and process aspects of patient care that play an important role in patient safety and quality of care.

There were several limitations to this study. First, the EHRs did not provide patient-related data with which to assess patient compliance or complexity. Accurately diagnosing persons 65 yr or older is difficult due to physical comorbidities, which can mask serious conditions such as cardiovascular disease.[21] These individual differences need to be controlled in further studies to truly understand the effectiveness of the approach. Second, although Explorys provided a diverse and rich dataset, access to and analysis of the data were difficult, which limited the types of analyses performed. Future work requires easier access to de-identified secondary patient data, as well as the ability to inform EHR software developers about the types of data needed to study physician errors and other patient safety issues.

CONCLUSION

Electronic-health-record-based models, specifically those with translation layers that impose meaning on the data, have the potential to identify sources of physician error along the medical diagnosis and treatment process. The ability to assess the predominance of diagnostic or treatment errors for a given disease state allows for a more accurate assessment of physician competency, which in turn facilitates CME selection. The Boolean-rule-based approach also provided a means to systematically explore concerns about system- and process-related contributions to patient diagnostic error. Although this study had limitations, the results demonstrated that diagnosis and treatment should be considered separable processes when determining patient-related errors. There is also potential to extend the results to include system and process issues beyond the physician that contribute to patient errors, leading to a more holistic approach to improving patient safety and QoC overall.

Acknowledgments

This research and development project was conducted by Design Interactive and was made possible by a contract vehicle that was awarded and administered by the U.S. Army Medical Research & Materiel Command and the Telemedicine & Advanced Technology Research Center, at Fort Detrick, MD, under award number W81XWH-13-1-0311.

Presentation

Presented as an oral presentation at the 2016 Military Health System Research Symposium, Kissimmee, FL, August 2016 (abstract number: MHSRS-16–1332).

Funding

This research was funded by a grant from the U.S. Army Medical Research & Materiel Command and the Telemedicine & Advanced Technology Research Center, at Fort Detrick, MD, under award number W81XWH-13-1-0311.

Author notes

The views, opinions, and/or findings contained in this article are those of the authors and do not necessarily reflect the views of the Department of Defense and should not be construed as an official DoD/Army position, policy, or decision unless so designated by other documentation. No official endorsement should be made.

REFERENCES

1. Holmboe ES, Durning SJ: Assessing clinical reasoning: moving from in vitro to in vivo. Diagnosis 2014; 1(1): 111–7.
2. Institute of Medicine: Improving Diagnosis in Health Care. Washington, DC, National Academies of Sciences, Engineering, and Medicine, 2015. Available at http://iom.nationalacademies.org/Reports/2015/Improving-Diagnosis-in-Healthcare.aspx; accessed January 9, 2017.
3. Singh H, Meyer AN, Thomas EJ: The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf 2014; doi:10.1136/bmjqs-2013-002627. Available at http://qualitysafety.bmj.com/content/early/2014/04/04/bmjqs-2013-002627.abstract; accessed January 9, 2017.
4. Singh H, Sittig DF: Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015; 24(2): 103–10.
5. Grober ED, Bohnen JM: Defining medical error. Can J Surg 2005; 48(1): 39.
6. Zwaan L, Singh H: The challenges in defining and measuring diagnostic error. Diagnosis 2015; 2(2): 97–103.
7. Hannawa AF: Heuristic thinking: interdisciplinary perspectives on medical error. J Public Health Res 2013; 2(3): e22.
8. Singh H, Thomas EJ, Khan MM, Petersen LA: Identifying diagnostic errors in primary care using an electronic screening algorithm. Arch Intern Med 2007; 167(3): 302–8.
9. Weaver SJ, Newman-Toker DE, Rosen MA: Reducing cognitive skill decay and diagnostic error: theory-based practices for continuing education in health care. J Contin Educ Health Prof 2012; 32(4): 269–78.
10. Champney RK, Baker EG, Daly T, et al.: An individualized approach to remediating skill decay: framework and applications. Presented at the Interservice/Industry Training, Simulation & Education Conference (I/ITSEC), Orlando, FL, 2014. Available at http://www.chirpe.com/EventSessions.aspx?DISPLAYMODE=2&SessionID=2612&EventID=2759; accessed November 22, 2016.
11. Schlegel C, Woermann U, Shaha M, Rethans JJ, van der Vleuten C: Effects of communication training on real practice performance: a role-play module versus a standardized patient module. J Nurs Educ 2012; 51(1): 16–22.
12. Talbot TB, Sagae K, John B, Rizzo AA: Designing useful virtual standardized patient encounters. In: Interservice/Industry Training, Simulation and Education Conference Proceedings, 2012; 3–6. Available at https://pdfs.semanticscholar.org/acd6/3d0e5b128e42a00d62566b7399cd69386583.pdf; accessed November 22, 2016.
13. Peterson ED, Roe MT, Mulgund J, et al.: Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006; 295(16): 1912–20.
14. Cebul RD, Love TE, Jain AK, Hebert CJ: Electronic health records and quality of diabetes care. NEJM 2011; 365(9): 825–33.
15. Venta K, Baker E, Fidopiastis C, Stanney K: The value of EHR-based assessment of physician competency: an investigative effort with internal medicine physicians. Int J Med Inform 2017; 108: 169–74.
16. Explorys: Available at https://www.explorys.com/; accessed November 26, 2016.
17. Centers for Disease Control and Prevention: U.S. Diabetes Surveillance System. Available at http://gis.cdc.gov/grasp/diabetes/DiabetesAtlas.html; accessed November 16, 2016.
18. Centers for Disease Control and Prevention: Chronic Disease and Health Promotion Open Data. Available at https://chronicdata.cdc.gov; accessed November 16, 2016.
19. McDonald KM, Bryce CL, Graber ML: The patient is in: patient involvement strategies for diagnostic error mitigation. BMJ Qual Saf 2013; 22(suppl 2): 33–9.
20. Asthma and Allergy Foundation of America: Available at http://www.aafa.org/; accessed November 22, 2016.
21. Skinner TR, Scott IA, Martin JH: Diagnostic errors in older patients: a systematic review of incidence and potential causes in seven prevalent diseases. Int J Gen Med 2016; 9: 137–46.


