Hecheng0625, nielsr (HF Staff) committed
Commit d1495b6 (verified) Β· Parent(s): ce635a7

Improve model card: Add paper title, abstract, and project page (#2)


- Improve model card: Add paper title, abstract, and project page (df533ba100961034c4e9edfefe88e55093d4a94d)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
README.md +15 -3
README.md CHANGED
@@ -6,15 +6,23 @@ language:
 - fr
 - de
 - ko
+library_name: transformers
 license: apache-2.0
 pipeline_tag: text-to-speech
 tags:
 - Speech-Tokenizer
 - Text-to-Speech
-library_name: transformers
 ---
 
-# πŸš€ TaDiCodec
+# TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling
+
+This model is associated with the paper [TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling](https://arxiv.org/abs/2508.16790).
+
+## Abstract
+
+Speech tokenizers serve as foundational components for speech language models, yet current designs exhibit several limitations, including: 1) dependence on multi-layer residual vector quantization structures or high frame rates, 2) reliance on auxiliary pre-trained models for semantic distillation, and 3) requirements for complex two-stage training processes. In this work, we introduce the Text-aware Diffusion Transformer Speech Codec (TaDiCodec), a novel approach designed to overcome these challenges. TaDiCodec employs end-to-end optimization for quantization and reconstruction through a diffusion autoencoder, while integrating text guidance into the diffusion decoder to enhance reconstruction quality and achieve optimal compression. TaDiCodec achieves an extremely low frame rate of 6.25 Hz and a corresponding bitrate of 0.0875 kbps with a single-layer codebook for 24 kHz speech, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS). Notably, TaDiCodec employs a single-stage, end-to-end training paradigm, obviating the need for auxiliary pre-trained models. We also validate the compatibility of TaDiCodec in language-model-based zero-shot text-to-speech with both autoregressive modeling and masked generative modeling, demonstrating its effectiveness and efficiency for speech language modeling, as well as a significantly small reconstruction-generation gap. Audio samples are available at https://tadicodec.github.io/ . We release code and model checkpoints at https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer .
+
+## πŸš€ TaDiCodec
 
 We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (TaDiCodec), a novel approach to speech tokenization that employs end-to-end optimization for quantization and reconstruction through a **diffusion autoencoder**, while integrating **text guidance** into the diffusion decoder to enhance reconstruction quality and achieve **optimal compression**. TaDiCodec achieves an extremely low frame rate of **6.25 Hz** and a corresponding bitrate of **0.0875 kbps** with a single-layer codebook for **24 kHz speech**, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS).
 
@@ -26,6 +34,10 @@ We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (
 [![PyTorch](https://img.shields.io/badge/PyTorch-2.0+-ee4c2c.svg)](https://pytorch.org/)
 [![Hugging Face](https://img.shields.io/badge/πŸ€—%20HuggingFace-tadicodec-yellow)](https://huggingface.co/amphion/TaDiCodec)
 
+## Project Page
+
+Audio samples and a demo are available on the project page: [https://tadicodec.github.io/](https://tadicodec.github.io/)
+
 # πŸ€— Pre-trained Models
 
 ## πŸ“¦ Model Zoo - Ready to Use!
@@ -47,7 +59,7 @@ We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (
 |:-----:|:----:|:---:|:---------------:|:-------------:|
 | **πŸ€– TaDiCodec-TTS-AR-Qwen2.5-0.5B** | AR | Qwen2.5-0.5B-Instruct | [![HF](https://img.shields.io/badge/πŸ€—%20HF-TaDiCodec--AR--0.5B-yellow)](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-0.5B) | βœ… |
 | **πŸ€– TaDiCodec-TTS-AR-Qwen2.5-3B** | AR | Qwen2.5-3B-Instruct | [![HF](https://img.shields.io/badge/πŸ€—%20HF-TaDiCodec--AR--3B-yellow)](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-3B) | βœ… |
-| **πŸ€– TaDiCodec-TTS-AR-Phi-3.5-4B** | AR | Phi-3.5-mini-instruct | [![HF](https://img.shields.io/badge/πŸ€—%20HF-TaDiCodec--AR--4B-yellow)](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Phi-3.5-4B) | 🚧 |
+| **πŸ€– TaDiCodec-TTS-AR-Phi-3.5-4B** | AR | Phi-3.5-mini-instruct | [![HF](https://img.shields.io/badge/πŸ€—%20HF-TaDiCodec--AR--4B-yellow)](https://huggingface.co/amphion/TaDiCodec-AR-Phi-3.5-4B) | 🚧 |
 | **🌊 TaDiCodec-TTS-MGM** | MGM | - | [![HF](https://img.shields.io/badge/πŸ€—%20HF-TaDiCodec--MGM-yellow)](https://huggingface.co/amphion/TaDiCodec-TTS-MGM) | βœ… |
 
 ## πŸ”§ Quick Model Usage
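As a side note on the figures quoted in the updated abstract, the stated bitrate is arithmetically consistent with a single-layer codebook emitting one 14-bit token at 6.25 Hz. A minimal sanity check (the 2^14 codebook size is inferred from the numbers, not stated in this diff):

```python
# Sanity-check the bitrate/frame-rate figures from the model card.
frame_rate_hz = 6.25          # tokens per second (from the model card)
bitrate_bps = 0.0875 * 1000   # 0.0875 kbps -> 87.5 bits per second

# Bits carried by each token of the single-layer codebook.
bits_per_token = bitrate_bps / frame_rate_hz   # 14.0

# A 14-bit token implies a codebook of 2**14 entries (inferred, not stated).
codebook_size = 2 ** round(bits_per_token)     # 16384

# Raw 24 kHz samples represented by each token.
samples_per_token = 24000 / frame_rate_hz      # 3840.0

print(bits_per_token, codebook_size, samples_per_token)
```

This kind of back-of-the-envelope check is useful when comparing codecs whose papers quote bitrate and frame rate in different units.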