Review

Artificial intelligence for mechanical ventilation: systematic review of design, reporting standards, and bias

Journal

BRITISH JOURNAL OF ANAESTHESIA
Volume 128, Issue 2, Pages 343-351

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.bja.2021.09.025

Keywords

artificial intelligence; bias; critical care; decision support; mechanical ventilation; respiratory failure

Funding

  1. US National Institutes of Health [NIBIB R01 EB017205]
  2. National Institute for Health Research Invention for Innovation [200681]
  3. National Institute of Academic Anaesthesia [NIAA19R108]
  4. Wellcome Trust
  5. Royal Academy of Engineering

This study analyzed the application of artificial intelligence (AI) in mechanical ventilation and identified limitations such as limited availability of data sets and code, under-reporting of ethnicity and model calibration, and high risk of bias. Potential solutions were proposed to improve confidence in and translation of this promising approach.
Background: Artificial intelligence (AI) has the potential to personalise mechanical ventilation strategies for patients with respiratory failure. However, current methodological deficiencies could limit clinical impact. We identified common limitations and propose potential solutions to facilitate translation of AI to mechanical ventilation of patients.

Methods: A systematic review was conducted in MEDLINE, Embase, and PubMed Central to February 2021. Studies investigating the application of AI to patients undergoing mechanical ventilation were included. Algorithm design and adherence to reporting standards were assessed with a rubric combining published guidelines, including the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST), and authors were contacted to assess data and code availability.

Results: Our search identified 1,342 studies, of which 95 were included: 84 had a single-centre, retrospective design, and only one was a randomised controlled trial. Access to data sets and code was severely limited (unavailable in 85% and 87% of studies, respectively). On request, data and code were made available by 12 and 10 authors, respectively, from the 54 studies published in the last 5 yr. Ethnicity was reported in only 18/95 studies (19%), as was model calibration (17/95, 18%). Risk of bias was high in 89% (85/95) of the studies, especially because of analysis bias.

Conclusions: Development of algorithms should involve prospective and external validation, with greater code and data availability, to improve confidence in and translation of this promising approach.
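The under-reporting of model calibration noted in the results refers to checking whether predicted risks match observed event rates. A minimal sketch of such a check is below; the prediction and outcome values are hypothetical illustration data, not drawn from the review, and the function names are the author's own.

```python
# Hedged sketch: two simple calibration checks for a probabilistic model,
# the kind of reporting item the review found missing in most studies.

def calibration_bins(probs, outcomes, n_bins=5):
    """Group predictions into equal-width probability bins and compare
    mean predicted risk with the observed event rate in each bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    results = []
    for members in bins:
        if not members:
            continue  # skip empty bins rather than divide by zero
        mean_pred = sum(p for p, _ in members) / len(members)
        obs_rate = sum(y for _, y in members) / len(members)
        results.append((round(mean_pred, 3), round(obs_rate, 3), len(members)))
    return results

def observed_expected_ratio(probs, outcomes):
    """Calibration-in-the-large: observed events / expected events.
    A well-calibrated model gives a ratio near 1.0."""
    return sum(outcomes) / sum(probs)

# Hypothetical predicted risks and binary outcomes for 10 patients.
probs = [0.1, 0.15, 0.3, 0.35, 0.5, 0.55, 0.7, 0.75, 0.9, 0.95]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]

print(calibration_bins(probs, outcomes))
print(round(observed_expected_ratio(probs, outcomes), 3))  # 6 / 5.25 ≈ 1.143
```

An O/E ratio above 1 (here 1.143) would suggest the model underestimates risk overall; the per-bin table shows where along the risk scale the miscalibration sits.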

