Article

Automated ICD coding using extreme multi-label long text transformer-based models

Journal

ARTIFICIAL INTELLIGENCE IN MEDICINE
Volume 144

Publisher

ELSEVIER
DOI: 10.1016/j.artmed.2023.102662

Keywords

Transformers; ICD coding; Extreme multi-label long text classification; MIMIC-III; MIMIC-II; Discharge summaries


Encouraged by the success of pretrained Transformer models in many natural language processing tasks, their use for International Classification of Diseases (ICD) coding tasks is now actively being explored. In this study, we investigated two existing Transformer-based models (PLM-ICD and XR-Transformer) and proposed a novel Transformer-based model (XR-LAT), aiming to address the extreme label set and long text classification challenges that are posed by automated ICD coding tasks. The Transformer-based model PLM-ICD, which currently holds the state-of-the-art (SOTA) performance on the ICD coding benchmark datasets MIMIC-III and MIMIC-II, was selected as our baseline model for further optimisation on both datasets. In addition, we extended the capabilities of the leading model in the general extreme multi-label text classification domain, XR-Transformer, to support longer sequences and trained it on both datasets. Moreover, we proposed a novel model, XR-LAT, which was also trained on both datasets. XR-LAT is a recursively trained model chain on a predefined hierarchical code tree with label-wise attention, knowledge transfer, and dynamic negative sampling mechanisms. Our optimised PLM-ICD models, which were trained with longer total and chunk sequence lengths, significantly outperformed the current SOTA PLM-ICD models, and achieved the highest micro-F1 scores of 60.8% and 50.9% on MIMIC-III and MIMIC-II, respectively. The XR-Transformer model, although SOTA in the general domain, did not perform well across all metrics. The best XR-LAT based models obtained results that were competitive with the current SOTA PLM-ICD models, including improving the macro-AUC by 2.1% and 5.1% on MIMIC-III and MIMIC-II, respectively. Our optimised PLM-ICD models are the new SOTA models for automated ICD coding on both datasets, while our novel XR-LAT models perform competitively with the previous SOTA PLM-ICD models.
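The label-wise attention referred to in the abstract lets each ICD code attend to different parts of a long discharge summary, producing one label-specific document representation per code. The following is a minimal NumPy sketch of that general mechanism, not the authors' implementation; the function name and shapes are illustrative, and in the actual models the hidden states `H` come from a pretrained Transformer encoder and the label queries `U` are learned parameters.

```python
import numpy as np

def labelwise_attention(H, U):
    """Label-wise attention sketch.

    H: (seq_len, d) token hidden states from an encoder (assumed input)
    U: (num_labels, d) one learned query vector per ICD code (assumed input)
    Returns V: (num_labels, d) label-specific document vectors.
    """
    scores = U @ H.T                              # (num_labels, seq_len) similarity scores
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over tokens, per label
    return alpha @ H                              # attention-weighted sum of token states

# Usage: 128 tokens with 64-dim hidden states, 50 candidate ICD codes
rng = np.random.default_rng(0)
H = rng.standard_normal((128, 64))
U = rng.standard_normal((50, 64))
V = labelwise_attention(H, U)
print(V.shape)  # (50, 64)
```

Each row of `V` would then typically be scored against its own code (e.g. via a per-label linear layer and sigmoid) to decide whether that code applies to the document.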


