Article

SignExplainer: An Explainable AI-Enabled Framework for Sign Language Recognition With Ensemble Learning

Journal

IEEE ACCESS
Volume 11, Issue -, Pages 47410-47419

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2023.3274851

Keywords

Deep learning; Artificial intelligence; Computational modeling; Predictive models; Assistive technologies; Computer vision; Gesture recognition; Explainable AI; SignExplainer; Classification; Sign language; Technological development

Abstract

Deep learning has driven much of the recent progress in artificial intelligence, significantly outperforming conventional machine learning approaches in fields such as Computer Vision, Natural Language Processing (NLP), Robotics, and Human-Computer Interaction (HCI). However, deep learning models are poor at explaining their underlying mechanisms, which is why they are often regarded as black boxes. To establish confidence and accountability, deep learning applications need to explain the model's decision in addition to its prediction. Explainable AI (XAI) research has produced methods that provide such interpretations for already-trained neural networks, which is especially important for computer vision tasks in domains such as medical science and defense systems. The proposed study applies XAI to Sign Language Recognition. The methodology uses an attention-based ensemble learning approach to make the prediction model more accurate, combining ResNet50 with a Self-Attention model in the ensemble learning architecture. The proposed ensemble learning approach achieved a remarkable accuracy of 98.20%. To interpret the ensemble's predictions, the authors propose SignExplainer, which explains the relevancy (as a percentage) of predicted results. SignExplainer shows excellent results compared to other conventional Explainable AI models reported in the state of the art.
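
The abstract names the building blocks of the method (a ResNet50 backbone, a Self-Attention module, and ensemble voting) but gives no implementation details. Below is a minimal PyTorch sketch of what such an attention-based ensemble classifier might look like; the class names SelfAttention and SignClassifier, the single-head attention design, and the soft-voting ensemble_predict helper are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn
    from torchvision import models

    class SelfAttention(nn.Module):
        """Single-head self-attention over backbone feature maps (illustrative)."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual blend

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C/8)
            k = self.key(x).flatten(2)                    # (B, C/8, HW)
            v = self.value(x).flatten(2)                  # (B, C, HW)
            attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) spatial attention
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x                   # residual connection

    class SignClassifier(nn.Module):
        """ResNet50 features -> self-attention -> pooled linear head (a sketch,
        not the published SignExplainer architecture)."""
        def __init__(self, num_classes):
            super().__init__()
            backbone = models.resnet50(weights="IMAGENET1K_V2")
            self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
            self.attention = SelfAttention(2048)  # channel count of conv5 output
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.head = nn.Linear(2048, num_classes)

        def forward(self, x):
            f = self.attention(self.features(x))
            return self.head(self.pool(f).flatten(1))

    def ensemble_predict(members, x):
        """Soft-voting ensemble: average the class probabilities of the members."""
        with torch.no_grad():
            probs = torch.stack([torch.softmax(m(x), dim=1) for m in members])
        return probs.mean(dim=0)  # (B, num_classes)

Because the ensemble averages softmax outputs, the per-class scores already sum to one, so reporting them as percentages (in the spirit of SignExplainer's relevancy-in-percent output) is a matter of scaling by 100; how the paper actually attributes relevance to input regions is not described in the abstract.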

