Proceedings Paper

IMPROVING MANDARIN TONE MISPRONUNCIATION DETECTION FOR NON-NATIVE LEARNERS WITH SOFT-TARGET TONE LABELS AND BLSTM-BASED DEEP MODELS

Publisher

IEEE

Keywords

Computer-assisted language learning (CALL); computer-assisted pronunciation training (CAPT); tone recognition and mispronunciation detection; deep learning

Funding

  1. China Scholarship Council
  2. NFR AULUS project


We propose three techniques to improve mispronunciation detection of Mandarin tones produced by second language (L2) learners using a tone-based extended recognition network (ERN). First, we extend our model from a deep neural network (DNN) to a bidirectional long short-term memory (BLSTM) network in order to model tone-level co-articulation influenced by a broader temporal context (e.g., two or three consecutive Mandarin syllables). Second, we relax the hard labels to handle cases where a single tone class label is insufficient, since L2 learners' pronunciations often fall between two canonical tone categories. Therefore, soft targets (a probabilistic transcription) are proposed for acoustic model training in place of conventional hard (one-hot) targets. Third, we average the tone scores produced by BLSTM models trained with hard and soft targets to exploit the complementarity between the two tone-target representations. Compared to our previous system based on the DNN-trained ERNs, the BLSTM-trained system with soft targets reduces the equal error rate (EER) from 5.77% to 4.86%, and system combination decreases the EER further to 4.34%, achieving a 24.78% relative error reduction.
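To illustrate the two training-target schemes and the score averaging described above, the following is a minimal sketch, assuming a PyTorch-style setup; it is not the authors' implementation, and names such as BlstmToneModel, soft_target_loss, and combine_scores are illustrative.

```python
# Minimal sketch (not the authors' code): soft-target training and score
# averaging for a BLSTM tone classifier, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlstmToneModel(nn.Module):
    """Bidirectional LSTM over frame-level acoustic features -> tone posteriors."""
    def __init__(self, feat_dim=40, hidden=128, num_tones=5):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tones)

    def forward(self, feats):              # feats: (batch, frames, feat_dim)
        h, _ = self.blstm(feats)
        return self.out(h)                 # per-frame tone logits

def soft_target_loss(logits, soft_labels):
    """Cross-entropy against a probabilistic transcription instead of a
    one-hot target, e.g. [0.0, 0.6, 0.4, 0.0, 0.0] when a realization lies
    between Tone 2 and Tone 3."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_labels * log_probs).sum(dim=-1).mean()

def combine_scores(scores_hard, scores_soft):
    """System combination: average tone posteriors from the hard-target and
    soft-target BLSTM models before mispronunciation scoring."""
    return 0.5 * (scores_hard + scores_soft)
```

In such a setup, the combined tone scores would be compared against a decision threshold tuned on a development set, with the EER reported at the operating point where false-acceptance and false-rejection rates are equal.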
