3.8 Proceedings Paper

Robust Acoustic Scene Classification to Multiple Devices Using Maximum Classifier Discrepancy and Knowledge Distillation

Publisher

IEEE

Keywords

acoustic scene classification; domain adaptation; maximum classifier discrepancy; convolutional neural network; knowledge distillation


This paper proposes a robust acoustic scene classification (ASC) method for multiple devices using maximum classifier discrepancy (MCD) and knowledge distillation (KD), which employs domain adaptation to train device-specific ASC models and combines them into a multi-domain ASC model. The proposed method aligns class distributions using MCD for domain adaptation and improves ASC accuracy for both target and non-target devices.
This paper proposes an acoustic scene classification (ASC) method that is robust to multiple recording devices, using maximum classifier discrepancy (MCD) and knowledge distillation (KD). The proposed method employs domain adaptation (DA) to train multiple ASC models, each dedicated to a single device, and combines these device-specific models into a multi-domain ASC model using a KD technique. For domain adaptation, the proposed method utilizes MCD to align class distributions, which conventional DA methods for ASC have ignored. The multi-device-robust ASC model is then obtained by KD, which combines the multiple MCD-based device-specific ASC models, each of which may perform poorly on non-target devices. Our experiments show that the proposed MCD-based device-specific model improved ASC accuracy by up to 12.22% for target-device samples, and the proposed KD-based device-general model improved ASC accuracy by 2.13% on average across all devices.
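
To make the two objectives in the abstract concrete, below is a minimal, self-contained PyTorch sketch of (i) a classifier discrepancy of the kind used in MCD-style domain adaptation (the mean absolute difference between two classifiers' softmax outputs) and (ii) a standard knowledge-distillation loss that blends a temperature-softened teacher/student KL term with hard-label cross-entropy. All names (mcd_discrepancy, kd_loss), the tensor shapes, and the temperature/alpha defaults are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
# Illustrative sketch only: losses of the kind the abstract describes,
# not the authors' code. Requires PyTorch.
import torch
import torch.nn.functional as F


def mcd_discrepancy(logits1: torch.Tensor, logits2: torch.Tensor) -> torch.Tensor:
    """Discrepancy between two classifiers' predictions on the same samples:
    the mean absolute difference of their softmax outputs, as in MCD-style
    domain adaptation. During adaptation this term is typically maximized
    w.r.t. the two classifiers and minimized w.r.t. the shared feature
    extractor on target-device samples."""
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    return (p1 - p2).abs().mean()


def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            temperature: float = 4.0,
            alpha: float = 0.5) -> torch.Tensor:
    """Standard knowledge-distillation objective: temperature-softened KL term
    against the teacher (here, a device-specific model) plus hard-label
    cross-entropy. temperature and alpha are placeholder values."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


if __name__ == "__main__":
    # Toy check with random logits: batch of 8 samples, 10 acoustic-scene classes.
    student = torch.randn(8, 10)
    teacher = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print("MCD discrepancy:", mcd_discrepancy(student, teacher).item())
    print("KD loss:", kd_loss(student, teacher.detach(), labels).item())
```

In this sketch the device-general student would distill from each MCD-adapted, device-specific teacher; how the teachers are weighted or combined is not specified here and would follow the paper's own training recipe.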


