Article

A lightweight speech recognition method with target-swap knowledge distillation for Mandarin air traffic control communications

Journal

PEERJ COMPUTER SCIENCE
Volume 9

Publisher

PEERJ INC
DOI: 10.7717/peerj-cs.1650

Keywords

Automatic speech recognition; Knowledge distillation; Air traffic control communications; Model compression; Mandarin ASR; Lightweight ASR

Abstract

Miscommunications between air traffic controllers (ATCOs) and pilots in air traffic control (ATC) may lead to catastrophic aviation accidents. Thanks to advances in speech and language processing, automatic speech recognition (ASR) is an appealing approach to preventing such misunderstandings. To allow ATCOs and pilots sufficient time to respond promptly and effectively, ASR systems for ATC must deliver both superior recognition performance and low transcription latency. However, most existing ASR work for ATC focuses primarily on recognition performance while paying little attention to recognition speed, which motivates the research in this article. To address this issue, this article introduces knowledge distillation into ASR for Mandarin ATC communications to enhance the generalization performance of the lightweight model. Specifically, we propose a simple yet effective lightweight strategy, named Target-Swap Knowledge Distillation (TSKD), which swaps the logit outputs of the teacher and student models for the target class. This mitigates the potential overconfidence of the teacher model regarding the target class and enables the student model to concentrate on distilling knowledge from the non-target classes. Extensive experiments demonstrate the effectiveness of the proposed TSKD in both homogeneous and heterogeneous architectures. The experimental results reveal that the resulting lightweight ASR model achieves a balance between recognition accuracy and transcription latency.
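The core idea of target-swap distillation, as described in the abstract, is to exchange the teacher's and student's logits at the target class before computing the usual temperature-scaled KD loss, so the teacher's (possibly overconfident) target logit no longer dominates the soft labels. The sketch below is an illustrative reconstruction of that mechanism, not the authors' reference implementation; the function name `tskd_loss` and the temperature default are assumptions.

```python
import torch
import torch.nn.functional as F

def tskd_loss(teacher_logits, student_logits, targets, temperature=2.0):
    """Illustrative Target-Swap KD loss (sketch, not the paper's code).

    teacher_logits, student_logits: (batch, num_classes) raw logits
    targets: (batch,) ground-truth class indices
    """
    t = teacher_logits.clone()
    s = student_logits.clone()
    idx = torch.arange(targets.size(0))
    # Swap the logits of the two models at the target class only:
    # the teacher's soft labels keep their non-target structure, but
    # the target entry now comes from the student (and vice versa).
    t[idx, targets] = student_logits[idx, targets]
    s[idx, targets] = teacher_logits[idx, targets]
    # Standard KD objective on the swapped logits: KL divergence
    # between temperature-softened distributions.
    p_teacher = F.softmax(t / temperature, dim=-1)
    log_p_student = F.log_softmax(s / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```

In practice this term would be combined with the usual supervised loss (e.g. CTC or cross-entropy) when training the student; a notable property of the swap is that if teacher and student already agree, the loss is zero, just as in vanilla KD.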


