Journal
MACHINE LEARNING
Volume 112, Issue 10, Pages 3917-3943
Publisher
SPRINGER
DOI: 10.1007/s10994-023-06358-1
Keywords
Relational learning; Inductive logic programming; Failure explanation
Categories
Abstract
Scientists form hypotheses and experimentally test them. If a hypothesis fails (is refuted), scientists try to explain the failure to eliminate other hypotheses. The more precise the failure analysis, the more hypotheses can be eliminated. Thus inspired, we introduce failure explanation techniques for inductive logic programming. Given a hypothesis represented as a logic program, we test it on examples. If a hypothesis fails, we explain the failure in terms of failing sub-programs. When a positive example fails, we identify failing sub-programs at the granularity of literals. We introduce a failure explanation algorithm based on analysing branches of SLD-trees. We integrate a meta-interpreter-based implementation of this algorithm with the test stage of the Popper ILP system. We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space. Our experimental results show that explaining failures can drastically reduce hypothesis space exploration and learning times.
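To make the core idea concrete, the following is a minimal, illustrative Python sketch (not the paper's meta-interpreter) of locating a failing sub-program at literal granularity: given a clause body and the head bindings from a failing positive example, it searches, SLD-style, for the shortest prefix of body literals that cannot be proved from the background facts. The `grandfather` hypothesis and the `parent`/`male` facts are invented for illustration.

```python
# Hypothetical background knowledge: ground facts as (predicate, args) tuples.
facts = {
    ("parent", ("ann", "bob")),
    ("parent", ("bob", "carl")),
    ("male", ("bob",)),
}

def satisfiable(literals, binding):
    """True if some extension of `binding` proves all `literals` from `facts`.

    A tiny backtracking proof search: variables are uppercase strings,
    constants are lowercase strings.
    """
    if not literals:
        return True
    pred, args = literals[0]
    for fpred, fargs in facts:
        if fpred != pred or len(fargs) != len(args):
            continue
        new = dict(binding)
        for a, fa in zip(args, fargs):
            if a[0].isupper():                    # logic variable
                if new.setdefault(a, fa) != fa:   # clashes with earlier binding
                    break
            elif a != fa:                         # constant mismatch
                break
        else:                                     # literal matched; try the rest
            if satisfiable(literals[1:], new):
                return True
    return False

def failing_subprogram(body, head_binding):
    """Shortest unprovable prefix of the clause body, or None if the body succeeds.

    The last literal of the returned prefix is where every proof branch fails.
    """
    for i in range(1, len(body) + 1):
        if not satisfiable(body[:i], dict(head_binding)):
            return body[:i]
    return None

# Hypothesis: grandfather(X,Z) :- parent(X,Y), male(X), parent(Y,Z).
body = [("parent", ("X", "Y")), ("male", ("X",)), ("parent", ("Y", "Z"))]
# Failing positive example grandfather(ann, carl) binds the head variables.
prefix = failing_subprogram(body, {"X": "ann", "Z": "carl"})
print(prefix[-1])  # the literal at which the proof fails: ('male', ('X',))
```

Here the example fails because `male(ann)` is unprovable, so the failing sub-program is the two-literal prefix ending in `male(X)`; in the paper's setting, such fine-grained explanations translate into constraints that prune all hypotheses containing the failing sub-program.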