Article

Multimodal convolutional neural network model with information fusion for intelligent fault diagnosis in rotating machinery

Journal

MEASUREMENT SCIENCE AND TECHNOLOGY
Volume 33, Issue 12

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6501/ac7eb0

Keywords

convolutional neural network; intelligent fault diagnosis; multimodal information fusion; rotating machinery

Funding

  1. National Natural Science Foundation of China (NSFC) [51905502, 61733016, 41672155]
  2. Hubei Provincial Natural Sciences Foundation Outstanding Youth Fund [2018CFA092]

This study proposes a multimodal neural network model that combines continuous wavelet transform and symmetrized dot pattern graphs for information fusion, resulting in improved fault diagnosis performance. Experimental results demonstrate that this model outperforms traditional single-modal CNN structures.
Accurate and efficient fault diagnosis in rotating machinery has long been an important and challenging problem, as it strongly affects the reliability and safety of industrial systems. In recent years, deep-learning-based methods have developed rapidly and been widely adopted across many areas. However, most are data-driven and focus on the architecture and design of convolutional neural network (CNN) models while neglecting the representation of the information itself, so the intrinsic characteristics of the signal cannot be fully explored. Moreover, the rich multidirectional information hidden inside the signal, which is key to improving the predictive performance of the entire fault diagnosis model, has usually been ignored. In this work, we propose a multimodal neural-network-based model that represents features more efficiently and effectively and thereby improves diagnostic performance. The method combines continuous wavelet transform and symmetrized dot pattern graphs through a channel information fusion mechanism after transforming the time-domain signal into a two-dimensional modality. Integrating one- and two-dimensional convolutions fully exploits the feature extraction capability of CNNs for multimodal signals, forming a multimodal CNN architecture under two-level information fusion. Experimental results show that the designed model achieves better performance than a traditional single-modal CNN structure.
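The paper itself does not include code. As a rough illustration of the symmetrized dot pattern (SDP) modality mentioned in the abstract, the standard SDP mapping can be sketched in plain Python; the function name `sdp_points` and the default parameters (lag, angular gain `zeta`, number of mirror axes) are illustrative assumptions, not the authors' settings:

```python
import math

def sdp_points(signal, lag=1, zeta=30.0, mirrors=6):
    """Map a 1-D time-domain signal to symmetrized-dot-pattern (SDP)
    polar coordinates (radius, angle in degrees).

    For each sample i, the radius is the min-max normalised amplitude
    r_i = (x[i] - x_min) / (x_max - x_min), and the lagged sample i+lag
    sets the angular deviation r_{i+lag} * zeta around each of the
    `mirrors` symmetry axes (one counterclockwise and one mirrored arm),
    producing the snowflake-like image used as a 2-D input modality.
    """
    x_min, x_max = min(signal), max(signal)
    span = (x_max - x_min) or 1.0          # guard against a constant signal
    r = [(v - x_min) / span for v in signal]
    points = []
    for i in range(len(signal) - lag):
        for m in range(mirrors):
            axis = 360.0 * m / mirrors
            points.append((r[i], axis + r[i + lag] * zeta))  # counterclockwise arm
            points.append((r[i], axis - r[i + lag] * zeta))  # mirrored arm
    return points

# Example: a clean sine tone yields a symmetric six-armed pattern;
# rendering the points on a polar scatter plot gives the SDP image
# that would be fused with the wavelet scalogram as a second channel.
sig = [math.sin(2 * math.pi * k / 32) for k in range(64)]
pts = sdp_points(sig)
```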

