Article; Proceedings Paper

Deep Audio-Visual Speech Recognition

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2018.2889052

Keywords

Hidden Markov models; Lips; Speech recognition; Visualization; Videos; Feeds; Training; Lip reading; audio visual speech recognition; deep learning

Funding

  1. EPSRC Programme Grant [Seebibyte EP/M013774/1]
  2. EPSRC CDT in Autonomous Intelligent Machines and Systems
  3. Oxford-Google DeepMind Graduate Scholarship

Abstract

This work recognises phrases and sentences spoken by a talking face, comparing two lip reading models and investigating how far lip reading complements audio speech recognition. The key contributions are a new, publicly released audio-visual dataset and trained models that surpass all previous results on a lip reading benchmark.
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
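To make the comparison in contribution (1) concrete, here is a minimal PyTorch sketch, not the authors' released implementation, of a shared Transformer self-attention encoder over pre-extracted lip-region features with the two training variants the abstract names: a CTC head and a sequence-to-sequence decoder head. The vocabulary size, model width, layer counts, and the use of pre-extracted features are placeholder assumptions for illustration only.

```python
# Minimal sketch, assuming PyTorch, a character-level output vocabulary and
# pre-extracted lip-region features of width D_MODEL. This is NOT the
# authors' released implementation; it only outlines the two decoding
# variants named in the abstract (CTC vs. sequence-to-sequence), both on top
# of a Transformer self-attention encoder.
import torch
import torch.nn as nn

VOCAB = 40      # assumed character-set size, with the CTC blank at index 0
D_MODEL = 512   # assumed feature / model width


class SharedEncoder(nn.Module):
    """Transformer self-attention encoder shared by both variants."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, feats):                    # feats: (batch, time, D_MODEL)
        return self.encoder(feats)


class CTCHead(nn.Module):
    """Variant 1: per-frame character posteriors trained with a CTC loss."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_MODEL, VOCAB)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, enc, targets, input_lens, target_lens):
        log_probs = self.proj(enc).log_softmax(dim=-1)   # (B, T, VOCAB)
        # nn.CTCLoss expects time-major input: (T, B, VOCAB)
        return self.ctc(log_probs.transpose(0, 1), targets,
                        input_lens, target_lens)


class Seq2SeqHead(nn.Module):
    """Variant 2: autoregressive Transformer decoder trained with
    cross-entropy under teacher forcing (start/end tokens omitted here)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.proj = nn.Linear(D_MODEL, VOCAB)
        self.xent = nn.CrossEntropyLoss()

    def forward(self, enc, tgt_in, tgt_out):
        L = tgt_in.size(1)
        # Causal mask so each output position only attends to earlier ones.
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        dec = self.decoder(self.embed(tgt_in), enc, tgt_mask=causal)
        logits = self.proj(dec)                          # (B, L, VOCAB)
        return self.xent(logits.reshape(-1, VOCAB), tgt_out.reshape(-1))


if __name__ == "__main__":
    feats = torch.randn(2, 75, D_MODEL)          # e.g. two 75-frame lip tracks
    enc = SharedEncoder()(feats)
    chars = torch.randint(1, VOCAB, (2, 20))     # dummy character targets
    ctc_loss = CTCHead()(enc, chars,
                         torch.tensor([75, 75]), torch.tensor([20, 20]))
    s2s_loss = Seq2SeqHead()(enc, chars[:, :-1], chars[:, 1:])
    print(float(ctc_loss), float(s2s_loss))
```

The CTC variant emits a character distribution per video frame and marginalises over alignments, while the sequence-to-sequence variant attends over the whole encoded sequence and is trained with teacher forcing; at test time such models are typically decoded with beam search, optionally combined with an external language model.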


Reviews

Overall rating: 4.8 (insufficient ratings)
Novelty, significance, scientific rigour: not yet rated

Recommendations

No data yet.