Proceedings Paper

Image Transformers with Regional Attention for Classification of Aneurysm Rupture Risk without Explicit Segmentation

Publisher

SPIE-INT SOC OPTICAL ENGINEERING
DOI: 10.1117/12.2610872

Keywords

AAA-Net; Abdominal Aortic Aneurysms; Representation Learning; Computed Tomography Angiography; Image Classification; Neural Embedding; Deep Learning; Image Transformers

Funding

  1. regional annotation of rare clinical disorders


This study automates the classification of abdominal aortic aneurysm (AAA) rupture risk from CTA images: 3D U-Nets pre-trained for AAA segmentation supply latent embeddings, which 3D ResNets then classify. Mixed-class 3D ResNets trained on embeddings from sole-class U-Nets achieved 90% accuracy in rupture risk stratification.
Occurring in the descending aorta, abdominal aortic aneurysms (AAAs) can result in death due to dissection or rupture. In the United States, approximately 200,000 people are diagnosed with an AAA per year, and AAA rupture is the 15th leading cause of death in the country. The stratification of AAA rupture risk is time-consuming and requires specialized medical expertise. To automate this clinical task, we demonstrate how latent embeddings from a convolutional neural network pre-trained for AAA segmentation can facilitate accurate classification of patient-specific rupture risk, starting with 3D computed tomography angiogram (CTA) images. The CTA dataset, consisting of 16 high-rupture-risk elective surgery (EC) and 14 low-rupture-risk surveillance (SC) cases, was segmented for AAAs. As part of a 3-fold cross-validation study on segmentation, three 3D U-Nets were trained with majority-EC, majority-SC, and split EC & SC folds. Test-set neural embeddings were extracted from each U-Net's bottleneck layer for 3D ResNet classification (as EC or SC), conducting 3-fold cross-validation with folds similar to those of the aforementioned segmentation study. Further, we investigated the classification accuracy of neural embeddings from sole-class and mixed-class pre-trained U-Nets. By evaluating the sensitivity and specificity of the classification exercises, we concluded that mixed-class 3D ResNets trained with embeddings from sole-class trained U-Nets produced 90% accuracy for rupture risk stratification (i.e., identifying EC vs. SC cases). We developed AAA-Net, a novel application that leverages U-Nets as transformers to facilitate image-based classification.
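The pipeline the abstract describes — extracting latent embeddings from a segmentation U-Net's bottleneck and classifying them with a 3D ResNet — can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's AAA-Net implementation: the toy `TinyUNet3D` and `ResNetHead3D` architectures, layer sizes, and the 16³ dummy volume are all hypothetical placeholders for the authors' actual networks and CTA data.

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Toy stand-in for a 3D segmentation U-Net (hypothetical sizes)."""
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(ch, ch * 2, 3, padding=1), nn.ReLU(),
        )
        # The latent embeddings used for classification come from this layer.
        self.bottleneck = nn.Conv3d(ch * 2, ch * 4, 3, padding=1)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv3d(ch * 4, 1, 3, padding=1),  # segmentation logits
        )

    def forward(self, x):
        return self.dec(self.bottleneck(self.enc(x)))

def extract_bottleneck(unet, volume):
    """Capture the bottleneck activation with a forward hook (no gradients)."""
    feats = {}
    handle = unet.bottleneck.register_forward_hook(
        lambda mod, inp, out: feats.__setitem__("z", out.detach())
    )
    with torch.no_grad():
        unet(volume)
    handle.remove()
    return feats["z"]

class ResNetHead3D(nn.Module):
    """Small residual 3D classifier over the embeddings (EC vs. SC)."""
    def __init__(self, in_ch, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, in_ch, 3, padding=1)
        self.conv2 = nn.Conv3d(in_ch, in_ch, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(in_ch, n_classes)

    def forward(self, z):
        h = torch.relu(self.conv1(z))
        h = self.conv2(h) + z                      # residual connection
        h = self.pool(torch.relu(h)).flatten(1)    # global average pool
        return self.fc(h)

unet = TinyUNet3D()
clf = ResNetHead3D(in_ch=32)
cta = torch.randn(1, 1, 16, 16, 16)    # dummy CTA volume (batch, channel, D, H, W)
z = extract_bottleneck(unet, cta)      # latent embedding from the frozen U-Net
logits = clf(z)                        # 2 logits: EC vs. SC
print(z.shape, logits.shape)
```

In the study's setup, the U-Net would first be trained for AAA segmentation on one of the cross-validation folds; only the classifier head would then be trained on the extracted embeddings to discriminate EC from SC cases.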
