Article

Squashing activation functions in benchmark tests: Towards a more eXplainable Artificial Intelligence using continuous-valued logic

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 218, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.106779

Keywords

XAI; Neural networks; Squashing function; Continuous logic; Fuzzy logic


The research explores a new approach to the interpretability problem of deep neural networks: combining them with soft, continuous-valued logic and multi-criteria decision-making tools to reduce their black-box nature. Experimental results demonstrate that Squashing activation functions perform comparably to conventional activation functions in neural networks and achieve high performance on simple classification tasks.
Over the past few years, deep neural networks have shown excellent results in multiple tasks; however, there is still an increasing need to address the problem of interpretability to improve model transparency, performance, and safety. Logical reasoning is a vital aspect of human intelligence, yet traditional symbolic reasoning methods are mostly based on hard rules, which may have only limited generalization capability. Achieving eXplainable Artificial Intelligence (XAI) by combining neural networks with soft, continuous-valued logic and multi-criteria decision-making tools is one of the most promising ways to approach this problem: through this combination, the black-box nature of neural models can be reduced. The continuous logic-based neural model uses so-called Squashing activation functions, a parametric family of functions that satisfy natural invariance requirements and contain rectified linear units as a particular case. This work presents the first benchmark tests that measure the performance of Squashing functions in neural networks. Three experiments were carried out to examine their usability, and a comparison with the most popular activation functions was made for five different network types. Performance was determined by measuring accuracy, loss, and time per epoch. These experiments and benchmarks show that Squashing functions are viable and comparable in performance to conventional activation functions. Moreover, a further experiment was conducted by implementing nilpotent logical gates to demonstrate how simple classification tasks can be solved successfully and with high performance. The results indicate that, due to the embedded nilpotent logical operators and the differentiability of the Squashing function, it is possible to solve classification problems where other commonly used activation functions fail. (C) 2021 Elsevier B.V. All rights reserved.
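The abstract does not spell out the parametric family itself. As an illustration only, one common parametrization of a Squashing function in the continuous-logic literature can be written as a scaled difference of softplus terms, which makes it smooth and differentiable; the parameter names (`a`, `lam`, `beta`) and the Łukasiewicz-style AND gate below are the editor's assumptions, not taken from the paper:

```python
import math

def softplus(x: float) -> float:
    """Numerically stable log(1 + exp(x))."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def squashing(x: float, a: float = 0.0, lam: float = 1.0, beta: float = 5.0) -> float:
    """Sketch of a Squashing activation (parametrization assumed, not from the abstract).

    Written as a difference of softplus terms; as beta grows, it approaches the
    hard cutting function clamp((x - a)/lam + 1/2, 0, 1), while remaining
    differentiable everywhere for finite beta.
    """
    u = beta * (x - a + lam / 2.0)
    v = beta * (x - a - lam / 2.0)
    return (softplus(u) - softplus(v)) / (beta * lam)

def soft_and(x: float, y: float, beta: float = 50.0) -> float:
    """Illustrative nilpotent (Lukasiewicz-style) AND gate.

    The hard gate is clamp(x + y - 1, 0, 1); squashing it with a large beta
    gives a smooth, trainable surrogate.
    """
    return squashing(x + y - 1.0, a=0.5, lam=1.0, beta=beta)
```

For instance, `squashing(0.0)` evaluates to 0.5 at the center of the transition, and `soft_and` maps inputs near (1, 1) close to 1 while mapping (1, 0) and (0, 0) close to 0, mimicking a differentiable conjunction of the kind the abstract attributes to nilpotent logical operators.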
