Article

Optimizing Multimodal Scene Recognition through Mutual Information-Based Feature Selection in Deep Learning Models

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 21

Publisher

MDPI
DOI: 10.3390/app132111829

Keywords

scene recognition; convolutional neural network (CNN); deep learning; mutual information (MI); multimodal; AID


This article introduces a methodology that combines convolutional neural networks with mutual information-based feature selection for scene recognition. The study addresses the limitations of conventional methods and improves the precision and dependability of scene classification. The approach achieves notably high accuracy on two benchmark datasets, surpassing established techniques, and has practical implications for computer vision applications that depend on accurate scene recognition and interpretation.
The field of scene recognition, at the intersection of computer vision and artificial intelligence, has advanced considerably in recent years. This article introduces a novel methodology for scene recognition that combines convolutional neural networks (CNNs) with feature selection based on mutual information (MI). The main goal of our study is to address the limitations of conventional unimodal methods and thereby improve the precision and dependability of scene classification. Our research centers on a comprehensive approach to scene detection that applies multimodal deep learning to a single input image. Our work is distinguished by its integration of CNN-based feature extraction with MI-based feature selection, which offers advantages over prevailing methodologies. To assess the effectiveness of our methodology, we performed experiments on two publicly available datasets: a scene categorization dataset and the AID dataset. Our method achieved accuracies of 100% and 98.83% on the respective datasets, surpassing other established techniques. The end-to-end approach aims to reduce complexity and resource requirements, providing a robust framework for scene categorization. This work advances the practical application of computer vision in real-world scenarios and substantially improves the accuracy of scene recognition and interpretation.
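The abstract's central step, ranking deep features by their mutual information with the scene labels and keeping only the most informative ones before classification, can be sketched as follows. This is a minimal illustration using scikit-learn's mutual_info_classif on placeholder data; the helper name select_features_by_mi, the feature dimensionality, the number of retained features, and the SVM classifier are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of mutual-information-based feature selection on CNN features.
# All names, shapes, and the downstream classifier are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def select_features_by_mi(features, labels, n_selected=256):
    """Rank feature dimensions by mutual information with the class labels
    and keep the top n_selected columns."""
    mi_scores = mutual_info_classif(features, labels, random_state=0)
    top_idx = np.argsort(mi_scores)[::-1][:n_selected]
    return features[:, top_idx], top_idx

# features: (n_images, n_dims) array of CNN activations, e.g. pooled outputs
# of a pretrained backbone; labels: integer scene classes.
features = np.random.rand(500, 512)           # placeholder CNN features
labels = np.random.randint(0, 10, size=500)   # placeholder scene labels

selected, idx = select_features_by_mi(features, labels, n_selected=256)
clf = SVC().fit(selected, labels)             # any downstream classifier
```

In practice the retained feature indices would be fixed on the training split and reused at inference time, so the selection step adds negligible cost to the end-to-end pipeline.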
