Journal
IEEE INTELLIGENT SYSTEMS
Volume 37, Issue 4, Pages 18-26
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/MIS.2022.3197950
Funding
- Science and Technology Development Fund, Macau SAR [0050/2020/A1]
- International Partnership Program of The Chinese Academy of Sciences [GJHZ202112]
- National Natural Science Foundation of China [62103411]
- Young Elite Scientists Sponsorship Program of China Association of Science and Technology [YESS20210289]
- China Postdoctoral Science Foundation [2020TQ1057, 2020M682823]
This article introduces the theoretical framework of scenarios engineering for building trustworthy AI techniques. It proposes six key dimensions, including intelligence and index, calibration and certification, and verification and validation, to achieve more robust and trustworthy AI.
Artificial intelligence (AI)'s rapid development has produced a variety of state-of-the-art models and methods that rely on network architectures and feature engineering. However, some AI approaches achieve highly accurate results only at the expense of interpretability and reliability. These problems can easily lead to poor user experiences, lower trust, and systematic or even catastrophic risks. This article introduces the theoretical framework of scenarios engineering for building trustworthy AI techniques. We propose six key dimensions, including intelligence and index, calibration and certification, and verification and validation, to achieve more robust and trusted AI, and we discuss open issues, future research directions, and applications along this direction.
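To make the verification-and-validation dimension concrete, the sketch below shows one way a scenario-based evaluation harness could be organized: a model is run against named scenarios (nominal and edge-case inputs) and certified only if its pass rate clears a per-scenario threshold. This is a minimal illustrative sketch, not the authors' implementation; the Scenario dataclass, the evaluate function, and the tolerance and min_pass_rate parameters are hypothetical names introduced here for illustration.

```python
# Hypothetical sketch of scenario-based verification and validation (V&V).
# All names (Scenario, evaluate, tolerance, min_pass_rate) are illustrative
# assumptions, not part of the paper's framework or code.

from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple


@dataclass
class Scenario:
    """A named test scenario: input/expected pairs plus a pass threshold."""
    name: str
    cases: Sequence[Tuple[float, float]]   # (input, expected output) pairs
    tolerance: float = 0.1                 # max allowed absolute error per case
    min_pass_rate: float = 0.95            # fraction of cases that must pass


def evaluate(model: Callable[[float], float],
             scenarios: List[Scenario]) -> Dict[str, dict]:
    """Run the model on every scenario and report per-scenario pass rates."""
    report = {}
    for sc in scenarios:
        passed = sum(1 for x, y in sc.cases if abs(model(x) - y) <= sc.tolerance)
        rate = passed / len(sc.cases)
        report[sc.name] = {"pass_rate": rate, "certified": rate >= sc.min_pass_rate}
    return report


if __name__ == "__main__":
    # Toy model and two hand-written scenarios (nominal vs. edge-case inputs).
    model = lambda x: 2.0 * x
    scenarios = [
        Scenario("nominal", [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]),
        Scenario("edge_cases", [(0.0, 0.0), (-1.0, -2.0), (1e6, 2e6)], tolerance=1.0),
    ]
    print(evaluate(model, scenarios))
```

Separating scenario definitions from the evaluation loop mirrors the idea of treating scenarios as first-class engineering artifacts: new edge cases can be added and certified without touching the model under test.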