Proceedings Paper

A Deep and Recurrent Architecture for Primate Vocalization Classification

Venue

INTERSPEECH 2021
Pages 461-465

Publisher

ISCA (International Speech Communication Association)
DOI: 10.21437/Interspeech.2021-1274

Keywords

Deep Audio Classification; Recurrent Neural Networks

Wildlife monitoring is an essential part of most conservation efforts, and acoustic monitoring is one of its building blocks: it is non-invasive and applicable in areas of dense vegetation. In this work, we present a deep and recurrent architecture for the classification of primate vocalizations, built from well-proven modules such as bidirectional Long Short-Term Memory (BiLSTM) networks, pooling, normalized softmax, and focal loss. Additionally, we apply Bayesian optimization to obtain a suitable set of hyperparameters. We test our approach on a recently published dataset of primate vocalizations recorded in an African wildlife sanctuary. Using an ensemble of the five best models found during hyperparameter optimization on the development set, we achieve an Unweighted Average Recall (UAR) of 89.3% on the test set. Our approach outperforms the best baseline, an ensemble of various deep and shallow classifiers, which achieves a UAR of 87.5%.
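The Unweighted Average Recall reported above is the per-class recall averaged with equal weight for every class, so rare call types count as much as frequent ones. A minimal sketch of the metric (the class labels below are hypothetical and not taken from the paper's dataset):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """UAR: mean over classes of per-class recall (i.e., macro-averaged recall)."""
    correct = defaultdict(int)  # correctly predicted samples per true class
    total = defaultdict(int)    # total samples per true class
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical 3-class example (illustrative labels only):
y_true = ["chimp", "chimp", "mandrill", "guenon", "guenon", "guenon"]
y_pred = ["chimp", "mandrill", "mandrill", "guenon", "guenon", "chimp"]
print(round(unweighted_average_recall(y_true, y_pred), 3))  # → 0.722
```

Because UAR ignores class frequency, a classifier that only predicts the majority class scores poorly, which is why it is the standard metric for imbalanced bioacoustic datasets like this one.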
