
AI May Cut ‘Wrong Blood in Tube’ Errors

Oct 29, 2018 | Clinical Diagnostics Insider, Diagnostic Testing and Emerging Technologies, Emerging Tests


Machine learning-based multianalyte delta checks may be effective at identifying 'wrong blood in tube' (WBIT) errors, according to a proof-of-concept study published in the American Journal of Clinical Pathology. The authors say that these technology-enabled multianalyte delta checks will be more effective at identifying errors and improving patient safety than traditional single-analyte delta checks.

Because laboratories usually manage the analytic phase of testing with internal quality control measures, the pre- and postanalytic phases may be the most vulnerable to errors, experts say. These preanalytic errors can include improper specimen collection and transport, or WBIT mislabeling errors. WBIT errors can negatively impact clinical care when diagnostic or treatment decisions are based on test results corresponding to the wrong patient.

"Although only a very small proportion of specimens are presumably affected by WBIT errors, WBIT errors in aggregate may not be rare due to high test volume," writes coauthor Matthew Rosenbaum, M.D., from Massachusetts General Hospital in Boston. "For example, even if only one in 1,000 (0.1 percent) specimens were impacted by a WBIT error, a hospital testing a million specimens per year might report 1,000 sets of erroneous results every year."

Given the impossibility of eliminating all human error, processes to reliably detect WBIT errors before result reporting could be impactful. So-called delta checks consider the absolute change in a test result for the same patient, but traditionally, they have only evaluated a single analyte. A multivariate, machine learning-based model may better distinguish physiologic changes from those indicating WBIT errors.
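For context, a traditional single-analyte delta check can be sketched in a few lines: flag a result when its absolute change from the patient's prior value exceeds a fixed threshold. The analyte and threshold below are illustrative, not taken from the study.

```python
# Minimal sketch of a single-analyte delta check: compare a current result
# against the patient's prior result and flag large absolute changes.
def delta_check(prior: float, current: float, threshold: float) -> bool:
    """Return True if the absolute change exceeds the allowed delta."""
    return abs(current - prior) > threshold

# Example: plasma creatinine (mg/dL) with a hypothetical 1.0 mg/dL threshold.
print(delta_check(1.1, 2.4, threshold=1.0))  # True (flagged)
print(delta_check(1.1, 1.3, threshold=1.0))  # False (not flagged)
```

The limitation the authors address is visible here: a single analyte's physiologic swing (say, creatinine rising in acute kidney injury) can look identical to a specimen swap, so one number alone carries little discriminating power.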

Rosenbaum and his colleague Jason Baron, M.D., simulated WBIT errors within sets of routine inpatient chemistry test results to develop, train, and evaluate five machine learning-based WBIT detection algorithms. Using data extracted from relevant inpatient laboratory test results stored in the hospital's laboratory information system, the researchers linked and aligned the results from each patient collection to the results from the most recent prior collection for the same patient admission for 11 commonly tested analytes (calcium, magnesium, plasma blood urea nitrogen [BUN], plasma creatinine, plasma glucose, phosphorus, anion gap, plasma chloride, plasma potassium, plasma bicarbonate, and plasma sodium).
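The article does not detail the simulation procedure, but one plausible sketch of simulating WBIT errors is to swap the current collection's results between two patients while each keeps their own prior results, then label the swapped collections as errors. The record structure and swap rate below are assumptions for illustration.

```python
import random

def simulate_wbit(records, rate=0.1, seed=0):
    """Hedged sketch: records is a list of dicts with 'prior' and 'current'
    result sets. Returns a copy in which roughly `rate` of collections have
    had their 'current' results swapped pairwise between patients, plus a
    parallel list of labels (1 = simulated WBIT error, 0 = correct)."""
    rng = random.Random(seed)
    out = [r.copy() for r in records]
    labels = [0] * len(records)
    n_swap = int(len(records) * rate) // 2 * 2  # even count, for pairing
    idx = rng.sample(range(len(records)), n_swap)
    for a, b in zip(idx[::2], idx[1::2]):
        out[a]["current"], out[b]["current"] = out[b]["current"], out[a]["current"]
        labels[a] = labels[b] = 1
    return out, labels
```

Because the labels are known by construction, the simulated set can serve as ground truth for supervised training and evaluation.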

The model was trained to identify WBIT errors based upon absolute change in test result, absolute velocity of change, and the actual values of prior test results (not the change between results). The training data consisted of 10,799 patient collections from 2,369 patient admissions, while the test data consisted of 9,839 patient collections from 2,486 patient admissions.
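The feature construction described above can be sketched as follows for one linked pair of collections. The analyte names, field layout, and time unit are illustrative assumptions, not the study's actual schema.

```python
# The 11 analytes named in the study (short illustrative keys).
ANALYTES = ["calcium", "magnesium", "bun", "creatinine", "glucose",
            "phosphorus", "anion_gap", "chloride", "potassium",
            "bicarbonate", "sodium"]

def features(prior: dict, current: dict, hours_between: float) -> list:
    """Flatten one collection pair into a numeric feature vector holding,
    per analyte: absolute change, velocity of change (change per hour),
    and the prior result value itself."""
    row = []
    for a in ANALYTES:
        delta = current[a] - prior[a]
        row.extend([abs(delta),                   # absolute change
                    abs(delta) / hours_between,   # velocity of change
                    prior[a]])                    # prior result value
    return row
```

With 11 analytes and three features each, every linked collection pair becomes a 33-element vector, which is the kind of input a multivariate model can consume.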

BUN and creatinine were the most powerful individual analytes in identifying WBIT errors, both having area under the curve (AUC) values of 0.84. At a sensitivity of 80 percent, BUN and creatinine were only 66 percent and 74 percent specific, respectively. Velocity of change was less powerful than absolute difference across all analytes.
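An operating point like "specificity at 80 percent sensitivity" can be read off a scored, labeled test set by scanning thresholds from strictest to loosest. The scores and labels in the test are toy values, not the study's data.

```python
def specificity_at_sensitivity(scores, labels, target_sens=0.80):
    """Scan candidate thresholds from high to low; return the specificity at
    the first (strictest) threshold whose sensitivity reaches the target."""
    pos = sum(labels)
    neg = len(labels) - pos
    for t in sorted(set(scores), reverse=True):
        flagged = [s >= t for s in scores]
        tp = sum(1 for f, l in zip(flagged, labels) if f and l)
        if tp / pos >= target_sens:
            tn = sum(1 for f, l in zip(flagged, labels) if not f and not l)
            return tn / neg
    return 0.0
```

Repeating this across all thresholds traces out the ROC curve whose area is the AUC the study reports.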

The best performing multivariate model, a support vector machine including the absolute change and current values for each analyte as predictors, had an AUC of 0.97 and a specificity of 96 percent at 80 percent sensitivity. However, at 80 percent sensitivity, this best performing multivariate delta check achieved a positive predictive value (PPV) of only 52 percent.
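The kind of model described, a support vector machine over per-collection feature vectors, can be sketched with scikit-learn (assumed here; the study's actual implementation, hyperparameters, and features are not specified in the article). The toy two-analyte data below stands in for the full 33-feature matrix.

```python
from sklearn.svm import SVC

# Each row: [abs change in BUN, current BUN, abs change in creatinine,
# current creatinine] -- a two-analyte miniature of the 11-analyte model.
X_train = [
    [2.0, 18.0, 0.2, 1.1],    # small deltas: plausibly the same patient
    [1.0, 22.0, 0.1, 0.9],
    [30.0, 55.0, 2.5, 3.8],   # large deltas: simulated WBIT swap
    [25.0, 8.0, 1.9, 0.4],
]
y_train = [0, 0, 1, 1]        # 1 = simulated WBIT error

# Linear kernel chosen for the sketch; the study's kernel is not reported.
model = SVC(kernel="linear").fit(X_train, y_train)
print(model.predict([[1.5, 20.0, 0.15, 1.0]]))
```

In practice the classifier's decision scores, rather than hard 0/1 predictions, would be thresholded to hit a chosen sensitivity, as in the 80 percent operating point quoted above.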

"Delta check models will only be useful in clinical practice if they can achieve a sufficient PPV to avoid 'alarm fatigue'," write the authors. "Because WBIT errors are presumed to be quite infrequent, these differences in accuracy translate into very important differences in PPV."
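The authors' point about prevalence can be made concrete with a standard Bayes calculation: PPV depends on sensitivity, specificity, and how common WBIT errors actually are. The prevalence figures below are purely illustrative, not the study's simulation parameters.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: true positives over all positives."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# At the quoted 80% sensitivity / 96% specificity, PPV collapses as the
# assumed error rate drops toward a realistic one-in-a-thousand:
print(round(ppv(0.80, 0.96, 0.05), 3))    # 0.513
print(round(ppv(0.80, 0.96, 0.001), 3))   # 0.02
```

At a 0.1 percent error rate, even a 96-percent-specific check would flag roughly 50 false positives for every true WBIT error, which is exactly the alarm-fatigue concern the authors raise.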

Takeaway: Machine-learning-based algorithms may be able to improve the performance of delta checks by incorporating multiple common analytes in the hopes of identifying WBIT errors and improving patient safety.
