---

# DRAFT: X-Codec2 (Transformers-compatible version)

The X-Codec2 model was proposed in [Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis](https://huggingface.co/papers/2502.04128).

This checkpoint was contributed by [Eric Bezzam](https://huggingface.co/bezzam).

The original code can be found [here](https://github.com/zhenye234/X-Codec-2.0).
## Usage example
Until X-Codec2 is merged into Transformers, it can be used by installing from the following fork:
```bash
pip install git+https://github.com/ebezzam/transformers.git@add-xcodec2
```
Here is a quick example of how to encode and decode an audio using this model:
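Until the merge lands, the exact class and checkpoint names may change. The sketch below is an assumption modeled on the Auto-class pattern used by other Transformers neural codecs (e.g. Encodec); the checkpoint id `"bezzam/xcodec2"` and the encode/decode output fields are hypothetical, so check the fork for the actual names:

```python
# A minimal sketch, not the confirmed API: the checkpoint id and the
# encode/decode output fields are assumptions modeled on other Transformers
# neural codecs (e.g. Encodec) -- check the fork for the actual names.
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "bezzam/xcodec2"  # hypothetical checkpoint id
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# One second of silence at the model's sampling rate, as dummy input.
audio = torch.zeros(feature_extractor.sampling_rate).numpy()
inputs = feature_extractor(
    raw_audio=audio,
    sampling_rate=feature_extractor.sampling_rate,
    return_tensors="pt",
)

with torch.no_grad():
    encoder_outputs = model.encode(inputs["input_values"])    # discrete codes
    audio_values = model.decode(encoder_outputs.audio_codes)  # waveform tensor
```

In practice you would load a real recording (e.g. with `datasets` or `soundfile`) and resample it to `feature_extractor.sampling_rate` before calling the extractor.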