Article

Explainable Artificial Intelligence for 6G: Improving Trust between Human and Machine

Journal

IEEE COMMUNICATIONS MAGAZINE
Volume 58, Issue 6, Pages 39-45

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MCOM.001.2000050

Keywords

Mathematical model; Computational modeling; Wireless communication; Optimization; Analytical models; Deep learning

Funding

  1. EC H2020 grant [778305]
  2. Alan Turing Institute under EPSRC grant [EP/N510129/1]

Abstract

As 5G mobile networks bring about global societal benefits, the design phase for 6G has started. Evolved 5G and 6G will need sophisticated AI to automate information delivery simultaneously for mass autonomy, human-machine interfacing, and targeted healthcare. Trust will become increasingly critical for 6G as it manages a wide range of mission-critical services. As we migrate from traditional mathematical model-dependent optimization to data-dependent deep learning, our insight into and trust in our optimization modules decrease. This loss of model explainability leaves us vulnerable to malicious data, poor neural network design, and the loss of trust from stakeholders and the general public, all with a range of legal implications. In this review, we outline the core methods of explainable artificial intelligence (XAI) in a wireless network setting, including public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, and XAI algorithms. Our review is grounded in case studies for both wireless PHY and MAC layer optimization and provides the community with an important research area to embark upon.
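
The record contains no code; the snippet below is a minimal sketch, assuming PyTorch, of input-gradient saliency, one of the simplest post-hoc XAI techniques of the kind the article surveys. The two-layer model, the feature names (snr, interference, mobility), and the random input are hypothetical stand-ins for a learned PHY-layer power-allocation module, not the authors' method.

  # Illustrative sketch (not from the paper): input-gradient saliency
  # for a hypothetical learned PHY-layer optimizer.
  import torch
  import torch.nn as nn

  torch.manual_seed(0)

  # Hypothetical stand-in for a learned PHY-layer optimizer: maps
  # per-link channel features to a transmit-power score.
  model = nn.Sequential(
      nn.Linear(3, 16), nn.ReLU(),
      nn.Linear(16, 1),
  )

  features = ["snr", "interference", "mobility"]  # hypothetical inputs
  x = torch.randn(1, 3, requires_grad=True)       # one observed link state

  score = model(x).sum()  # scalar power score for this link
  score.backward()        # d(score)/d(input): sensitivity per feature

  # Larger |gradient| means the decision leans more on that feature.
  for name, grad in zip(features, x.grad[0]):
      print(f"{name:>12}: {grad.item():+.4f}")

Larger gradient magnitudes mark the inputs the decision leans on most, the sort of per-decision attribution that supports the stakeholder trust the abstract calls for.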

