Article

A More Robust Model to Answer Noisy Questions in KBQA

Journal

IEEE Access
Volume 11, Pages 22756-22766

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2023.3252608

Keywords

Noise measurement; Data models; Robustness; Error correction; Knowledge based systems; Predictive models; Question answering (information retrieval); Incremental learning; knowledge base question answering; machine learning; natural language processing; relation prediction; robustness


In practical applications, the raw input to a Knowledge Based Question Answering (KBQA) system may vary in form, expression, and source. The actual input to the system may therefore contain errors introduced by noise in the raw data and by processes such as transmission, transformation, and translation, so it is important to evaluate and enhance the robustness of a KBQA model to noisy questions. In this paper, we generate 29 datasets of various noisy questions based on the original SimpleQuestions dataset to evaluate and enhance the robustness of a KBQA model, and we propose a model that is more robust to such questions. Compared with traditional methods, our main contributions are a method of generating datasets of different noisy questions to evaluate the robustness of a KBQA model, and a KBQA model that incorporates incremental learning and a Masked Language Model (MLM) in the question answering process, so that it is less affected by different kinds of noise in questions and achieves higher accuracies on datasets of different noisy questions, demonstrating its robustness. Experimental results show that our model achieves an average accuracy of 78.1% on these datasets and outperforms the baseline BERT-based model by an average margin of 5.0% at a similar training cost. Further experiments show that our model is also compatible with other pre-trained models such as ALBERT and ELECTRA.
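The abstract describes building evaluation datasets by adding noise to SimpleQuestions questions. As an illustration only (the paper's 29 specific noise types are not listed in this abstract), a minimal character-level noise injector along these lines might look like the following sketch; the function name and noise operations are assumptions for demonstration:

```python
import random

def add_char_noise(question: str, noise_rate: float = 0.1, seed: int = 0) -> str:
    """Return a copy of `question` with random character-level noise
    (adjacent-character swap, deletion, or duplication).

    Hypothetical sketch: the actual noise categories used in the paper
    are not specified in the abstract.
    """
    rng = random.Random(seed)  # seeded for reproducible datasets
    chars = list(question)
    out = []
    i = 0
    while i < len(chars):
        if chars[i] != " " and rng.random() < noise_rate:
            op = rng.choice(["swap", "delete", "duplicate"])
            if op == "swap" and i + 1 < len(chars) and chars[i + 1] != " ":
                # swap this character with the next one
                out.extend([chars[i + 1], chars[i]])
                i += 2
                continue
            elif op == "delete":
                # drop this character
                i += 1
                continue
            elif op == "duplicate":
                # repeat this character
                out.extend([chars[i], chars[i]])
                i += 1
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

clean = "who wrote the book gulliver's travels"
noisy = add_char_noise(clean, noise_rate=0.15, seed=42)
print(noisy)
```

Sweeping `noise_rate` and the set of enabled operations over a clean dataset is one straightforward way to produce a family of noisy evaluation sets of graded difficulty, in the spirit of the 29 datasets the paper derives from SimpleQuestions.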

