Article

Learning from interpretation transition using differentiable logic programming semantics

Journal

MACHINE LEARNING
Volume 111, Issue 1, Pages 123-145

Publisher

SPRINGER
DOI: 10.1007/s10994-021-06058-8

Keywords

Machine learning; Differentiable inductive logic programming; Explainability; Neuro-symbolic method; Learning from interpretation transition

Funding

  1. National Key R&D Program of China [2018YFC1314200, 2018YFB1003904]
  2. National Natural Science Foundation of China [61772035, 61972005, 61932001]
  3. NII international internship program
  4. JSPS KAKENHI [JP17H00763]

Abstract

The combination of learning and reasoning is an essential and challenging topic in neuro-symbolic research. Differentiable inductive logic programming is a technique for learning a symbolic knowledge representation from complete, mislabeled, or incomplete observed facts using neural networks. In this paper, we propose a novel differentiable inductive logic programming system called differentiable learning from interpretation transition (D-LFIT) for learning logic programs through the proposed embeddings of logic programs, neural networks, optimization algorithms, and an adapted algebraic method to compute the logic program semantics. The proposed model has several characteristics, including a small number of parameters, the ability to generate logic programs in a curriculum-learning setting, and linear time complexity for the extraction of trained neural networks. The well-known bottom clause propositionalization algorithm is incorporated when the proposed system learns from relational datasets. We compare our model with NN-LFIT, which extracts propositional logic rules from trained networks, the highly accurate rule learner RIPPER, the purely symbolic LFIT system LF1T, and CILP++, which integrates neural networks and the propositionalization method to handle first-order logic knowledge. From the experimental results, we conclude that D-LFIT yields accuracy comparable to the baselines when given complete, incomplete, and mislabeled data. Our experimental results indicate that D-LFIT not only learns symbolic logic programs quickly and precisely but also performs robustly when processing mislabeled and incomplete datasets.
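The abstract describes computing logic program semantics differentiably, so that interpretation transitions can be learned by gradient descent. As a minimal sketch of that idea (not the paper's actual D-LFIT architecture), the immediate-consequence operator T_P of a propositional program can be approximated by a matrix-vector product followed by a sigmoid: the weight matrix `W`, bias `b`, and the three-atom example program below are hypothetical, hand-set to emulate rules rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 3-atom system with rules:
#   p(t+1) <- q(t)
#   q(t+1) <- p(t)
#   r(t+1) <- p(t) AND q(t)
# In a learning setting, W and b would be trained from observed
# state transitions; here they are hand-set to emulate the rules.
W = np.array([
    [0.0, 10.0, 0.0],   # p(t+1) depends on q(t)
    [10.0, 0.0, 0.0],   # q(t+1) depends on p(t)
    [10.0, 10.0, 0.0],  # r(t+1) depends on both p(t) and q(t)
])
b = np.array([-5.0, -5.0, -15.0])  # biases act as soft conjunction thresholds

def soft_step(interpretation):
    """Differentiable analogue of the immediate-consequence operator T_P."""
    return sigmoid(W @ interpretation + b)

v = np.array([1.0, 1.0, 0.0])       # state: p and q true, r false
next_v = soft_step(v)
print((next_v > 0.5).astype(int))   # -> [1 1 1]
```

Because `soft_step` is differentiable, the discrepancy between predicted and observed next states can drive gradient updates of `W` and `b`, after which discrete rules are read off from the trained weights.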

