Article

Improve the Deep Learning Models in Forestry Based on Explanations and Expertise

Journal

FRONTIERS IN PLANT SCIENCE
Volume 13

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fpls.2022.902105

Keywords

explainable artificial intelligence; forest care; deep neural networks; feature unlearning; classification


This research aims to improve the interpretability of deep learning models in forestry and to enhance their performance through training guided by expertise. The experiments demonstrate that models can be improved based on explanation assessment and on the automatic generation of expertise in the form of an annotation matrix. The study emphasizes the importance of model interpretation, and of improvement based on expertise, in deep learning research.
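The guided training described above can be sketched as a loss that penalizes attribution mass falling on image regions an expert has marked as irrelevant, so the model "unlearns" those features. The sketch below is illustrative only: the binary annotation matrix (1 = expert-flagged irrelevant pixel), the squared-saliency penalty, and the weight `lam` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def guided_loss(ce_loss, saliency, annotation, lam=0.1):
    """Augment a classification loss with an expertise penalty.

    saliency:   attribution map produced by an XAI method
    annotation: binary expert matrix, 1 where the pixel should NOT
                influence the prediction
    The penalty grows when saliency concentrates on flagged pixels.
    """
    penalty = np.sum((annotation * saliency) ** 2)
    return ce_loss + lam * penalty

# toy example: saliency concentrated on an expert-flagged pixel
saliency = np.array([[0.9, 0.1],
                     [0.0, 0.0]])
annotation = np.array([[1, 0],
                       [0, 0]])  # top-left pixel marked irrelevant
loss = guided_loss(0.5, saliency, annotation, lam=0.1)
print(round(loss, 3))  # 0.581 = 0.5 + 0.1 * 0.9**2
```

In a real training loop this penalty would be differentiated through the attribution computation; here only the forward value is shown.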
In forestry studies, deep learning models have achieved excellent performance in many application scenarios (e.g., detecting forest damage). However, unclear model decisions (i.e., black-box behavior) undermine the credibility of the results and hinder their practical use. This study obtains explanations of such models using explainable artificial intelligence methods and then applies feature unlearning methods to improve their performance, the first such attempt in the field of forestry. Results of three experiments show that model training can be guided by expertise to acquire specific knowledge, which is reflected in the explanations. For all three experiments, based on synthetic and real leaf images, the improvement of the models is quantified by classification accuracy (up to 4.6%) and by three indicators of explanation assessment (i.e., root-mean-square error, cosine similarity, and the proportion of important pixels). In addition, the introduced expertise, in the form of an annotation matrix, was created automatically in all experiments. This study emphasizes that deep learning research in forestry should not only pursue model performance (e.g., higher classification accuracy) but also examine the explanations and improve models according to expertise.
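The three explanation-assessment indicators named in the abstract can be computed directly from an attribution map and a reference (e.g., the expert annotation). The following is a minimal sketch under assumptions: the maps are normalized 2D arrays, and the importance threshold of 0.5 is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def rmse(explanation, reference):
    """Root-mean-square error between two attribution maps."""
    return float(np.sqrt(np.mean((explanation - reference) ** 2)))

def cosine_similarity(explanation, reference):
    """Cosine similarity of the flattened attribution maps."""
    a, b = explanation.ravel(), reference.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def important_pixel_proportion(explanation, threshold=0.5):
    """Fraction of pixels whose attribution exceeds a threshold."""
    return float(np.mean(explanation > threshold))

# toy 2x2 maps: the explanation highlights one extra pixel
exp_map = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
ref_map = np.array([[1.0, 0.0],
                    [0.0, 0.0]])
print(rmse(exp_map, ref_map))               # 0.5
print(cosine_similarity(exp_map, ref_map))  # ~0.707
print(important_pixel_proportion(exp_map))  # 0.5
```

Lower RMSE, higher cosine similarity, and an important-pixel proportion closer to the expert annotation all indicate that the explanation better matches the expertise.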

