Article

Multi-view features fusion for birdsong classification

Journal

ECOLOGICAL INFORMATICS
Volume 72, 2022

Publisher

ELSEVIER
DOI: 10.1016/j.ecoinf.2022.101893

Keywords

Birdsong recognition; Deep features; Handcrafted features; mRMR; Feature selection

Funding

  1. Yunnan Provincial Science and Technology Department
  2. National Natural Science Foundation of China
  3. Yunnan Provincial Department of Education

Grant numbers: 202002AA10007, 61462078, 31860332, 2022Y558

Abstract

Birds, as important members of the ecosystem, are good indicators of the ecological environment. This paper proposes a birdsong classification model that combines deep learning and machine learning by utilizing multi-view features. The experimental results show that this method achieves higher accuracy and lower dimensionality in birdsong recognition.
As important members of the ecosystem, birds are good monitors of the ecological environment. Bird recognition, especially birdsong recognition, has attracted increasing attention in the field of artificial intelligence. At present, both traditional machine learning and deep learning are widely used in birdsong recognition. Deep learning can not only classify and recognize birdsong spectrograms but also serve as a feature extractor, while machine learning is often used to classify the handcrafted feature parameters extracted from birdsong. As the data samples fed to the classifier, the birdsong features directly determine the classifier's performance, and multi-view features obtained from different feature-extraction methods capture more complete information about birdsong. Therefore, to enrich the representational capacity of single features and to combine features more effectively, this paper proposes a birdsong classification model based on multi-view features, which combines deep features extracted by a convolutional neural network (CNN) with handcrafted features. First, four kinds of handcrafted features are extracted: the wavelet transform (WT) spectrum, the Hilbert-Huang transform (HHT) spectrum, the short-time Fourier transform (STFT) spectrum, and Mel-frequency cepstral coefficients (MFCC). A CNN is then used to extract deep features from the WT, HHT, and STFT spectra, and minimal-redundancy-maximal-relevance (mRMR) is applied to select the optimal features. Finally, three classification models (random forest, support vector machine, and multi-layer perceptron) are built on the deep features and the handcrafted features, and the classification probabilities produced from the two types of features are fused as new features to recognize birdsong. Taking sixteen bird species as research objects, the experimental results show that the three classifiers achieve accuracies of 95.49%, 96.25%, and 96.16%, respectively, with the features of the proposed method, outperforming the seven single features and three fused features involved in the experiment. The proposed method effectively combines deep features and handcrafted features from the perspective of the signal; the fused features express the information of the bird audio more comprehensively, achieve higher classification accuracy with lower dimensionality, and can effectively improve the performance of bird audio classification.
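The pipeline described in the abstract can be illustrated compactly. The following Python sketch is a rough approximation, not the authors' implementation: the feature extractors (librosa MFCC and STFT spectrogram), the greedy mRMR-style selector, the classifier choices, and all hyperparameters are assumptions made here for illustration, and the deep features are assumed to come from a CNN embedding computed on the spectrograms. It shows the three main ingredients of the method: handcrafted feature extraction, feature selection, and decision-level fusion of class probabilities from the two views.

```python
# Illustrative sketch of the multi-view fusion pipeline; feature extractors,
# the mRMR variant, and all hyperparameters are assumptions, not the paper's
# exact configuration. Deep features are assumed to come from a CNN embedding
# (e.g. a penultimate-layer vector computed on the WT/HHT/STFT spectrograms).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif


def handcrafted_mfcc(y, sr, n_mfcc=20):
    """One handcrafted view: frame-averaged MFCC vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


def stft_spectrogram(y, sr):
    """Log-magnitude STFT spectrogram, the kind of image a CNN extractor consumes."""
    S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))
    return librosa.amplitude_to_db(S, ref=np.max)


def mrmr_select(X, y, k=64):
    """Greedy mRMR-style selection: relevance = mutual information with labels,
    redundancy = mean absolute correlation with already-selected features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining and len(selected) < k:
        scores = []
        for j in remaining:
            if selected:
                red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            else:
                red = 0.0
            scores.append(relevance[j] - red)
        selected.append(remaining.pop(int(np.argmax(scores))))
    return selected


def probability_fusion(deep_X, hand_X, y, make_clf):
    """Decision-level fusion: train one classifier per view, concatenate their
    class-probability outputs, and train a final classifier on the fused probabilities."""
    clf_deep, clf_hand = make_clf(), make_clf()
    clf_deep.fit(deep_X, y)
    clf_hand.fit(hand_X, y)
    fused = np.hstack([clf_deep.predict_proba(deep_X), clf_hand.predict_proba(hand_X)])
    clf_final = make_clf()
    clf_final.fit(fused, y)
    return clf_deep, clf_hand, clf_final


if __name__ == "__main__":
    # Synthetic stand-ins for CNN deep features and handcrafted features (16 classes).
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 16, size=320)
    deep_X = rng.normal(size=(320, 256)) + labels[:, None] * 0.05
    hand_X = rng.normal(size=(320, 60)) + labels[:, None] * 0.05
    deep_X = deep_X[:, mrmr_select(deep_X, labels, k=64)]
    models = probability_fusion(deep_X, hand_X, labels,
                                lambda: RandomForestClassifier(n_estimators=200, random_state=0))
```

In practice the probabilities fed to the final classifier would be produced with cross-validation to avoid training the fusion stage on the base classifiers' own training outputs; the sketch omits this for brevity.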
