4.6 Article

AudioLM: A Language Modeling Approach to Audio Generation

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TASLP.2023.3288409

Keywords

Semantics; Acoustics; Training; Computational modeling; Codecs; Predictive models; Task analysis; Computer-generated music; Speech synthesis


We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, and without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
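The staged, hybrid token pipeline described in the abstract can be sketched as follows. This is a toy illustration only: `continue_tokens` and `audiolm_generate` are hypothetical stand-ins for the paper's Transformer language models, no real masked-LM (w2v-BERT) or neural-codec (SoundStream) tokenizers are involved, and the vocabulary size is an assumption made for the sketch.

```python
import random

def continue_tokens(prefix, vocab_size, n_new, seed):
    """Toy stand-in for an autoregressive Transformer decoder:
    extends a discrete token sequence with n_new sampled tokens."""
    rng = random.Random(seed)
    return list(prefix) + [rng.randrange(vocab_size) for _ in range(n_new)]

def audiolm_generate(semantic_prompt, coarse_prompt, n_new=20):
    # Stage 1: continue the semantic tokens, which capture long-term
    # structure (in the paper, discretized masked-LM activations).
    semantic = continue_tokens(semantic_prompt, vocab_size=1024,
                               n_new=n_new, seed=1)
    # Stage 2: generate coarse acoustic tokens conditioned on the full
    # semantic sequence (in the paper, neural audio codec codes).
    coarse = continue_tokens(semantic + list(coarse_prompt), vocab_size=1024,
                             n_new=n_new, seed=2)
    # Stage 3: fine acoustic tokens conditioned on the coarse ones;
    # a codec decoder would turn these back into a waveform.
    fine = continue_tokens(coarse, vocab_size=1024, n_new=n_new, seed=3)
    return semantic, coarse, fine
```

In the real system each stage is a separate model, and conditioning is done by prefixing the previous stage's token sequence, which the concatenation above mimics.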

