Article

Interpretable neural networks based on continuous-valued logic and multicriteria decision operators

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 199

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2020.105972

Keywords

Explainable artificial intelligence; Continuous logic; Nilpotent logic; Neural network; Adversarial problems

Funding

  1. Ministry for Innovation and Technology, Hungary [TUDFO/471381/2019-ITM]

Abstract

Combining neural networks with continuous logic and multicriteria decision-making tools can reduce the black-box nature of neural models. In this study, we show that nilpotent logical systems offer an appropriate mathematical framework for hybridizing continuous nilpotent logic and neural models, helping to improve the interpretability and safety of machine learning. In our concept, perceptrons model soft inequalities, namely membership functions and continuous logical operators. We design the network architecture before training: continuous logical operators and multicriteria decision tools with given weights operate in the hidden layers. Designing the structure appropriately leads to a drastic reduction in the number of parameters to be learned. The theoretical basis offers a straightforward choice of activation functions (the cutting function or its differentiable approximation, the squashing function) and also suggests an explanation for the great success of the rectified linear unit (ReLU). In this study, we focus on the architecture of the hybrid model and introduce the building blocks for future applications in deep neural networks. © 2020 Elsevier B.V. All rights reserved.
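The abstract's core idea, that a perceptron with the cutting function as its activation implements nilpotent (Łukasiewicz-style) logical operators, can be illustrated with a minimal sketch. The cutting function below is the standard clamp to [0, 1]; the `squash` approximation is a common log-sigmoid smoothing written here as an assumption, not the paper's exact parametric squashing function, and the helper names are illustrative only.

```python
import math

def softplus(z):
    """Numerically stable log(1 + e^z)."""
    return z + math.log1p(math.exp(-z)) if z > 0 else math.log1p(math.exp(z))

def cut(x):
    """Cutting function: clamps x to the unit interval [0, 1]."""
    return max(0.0, min(1.0, x))

def squash(x, beta=50.0):
    """A differentiable approximation of cut (assumed log-sigmoid form;
    it tends to cut(x) as beta grows)."""
    return (softplus(beta * x) - softplus(beta * (x - 1.0))) / beta

# A perceptron (weighted sum + bias, passed through the cutting activation)
# realizes the nilpotent Lukasiewicz operators:
def and_op(x, y):
    return cut(x + y - 1.0)   # Lukasiewicz conjunction: max(0, x + y - 1)

def or_op(x, y):
    return cut(x + y)         # Lukasiewicz disjunction: min(1, x + y)
```

Fixing such weights and biases before training, as the abstract describes, means these hidden units compute known logical operations rather than learned black-box features.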
