Journal
BRITISH JOURNAL OF ANAESTHESIA
Volume 128, Issue 2, Pages 343-351
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.bja.2021.09.025
Keywords
artificial intelligence; bias; critical care; decision support; mechanical ventilation; respiratory failure
Funding
- US National Institutes of Health [NIBIB R01 EB017205]
- National Institute for Health Research Invention for Innovation [200681]
- National Institute of Academic Anaesthesia [NIAA19R108]
- Wellcome Trust
- Royal Academy of Engineering
This study analysed the application of artificial intelligence (AI) to mechanical ventilation and identified common limitations, including limited availability of data sets and code, under-reporting of ethnicity and model calibration, and a high risk of bias. Potential solutions are proposed to improve confidence in and translation of this promising approach.
Background: Artificial intelligence (AI) has the potential to personalise mechanical ventilation strategies for patients with respiratory failure, but current methodological deficiencies could limit clinical impact. We identified common limitations and propose potential solutions to facilitate translation of AI to the mechanical ventilation of patients.

Methods: A systematic review was conducted in MEDLINE, Embase, and PubMed Central to February 2021. Studies investigating the application of AI to patients undergoing mechanical ventilation were included. Algorithm design and adherence to reporting standards were assessed with a rubric combining published guidelines, including the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST), and data and code availability were assessed by correspondence with authors.

Results: Our search identified 1,342 studies, of which 95 were included: 84 had a single-centre, retrospective design, and only one was a randomised controlled trial. Access to data sets and code was severely limited (unavailable in 85% and 87% of studies, respectively). On request, data and code were made available by 12 and 10 authors, respectively, from a list of 54 studies published in the last 5 yr. Ethnicity was frequently under-reported (reported in only 18/95 studies [19%]), as was model calibration (17/95 [18%]). The risk of bias was high in 89% (85/95) of the studies, especially because of analysis bias.

Conclusions: Development of algorithms should involve prospective and external validation, with greater code and data availability, to improve confidence in and translation of this promising approach.