DAC is the state-of-the-art audio tokenizer, with improvements upon previous tokenizers.
This model card provides an easy-to-use API for a *pretrained DAC* [1] for 44.1 kHz audio, whose backbone and pretrained weights come from [its original repository](https://github.com/descriptinc/descript-audio-codec). With this API, you can encode and decode with a single line of code, on either CPU or GPU. Furthermore, it supports chunk-based processing for memory-efficient encoding and decoding, which is especially important when running on a GPU.
### Model variations
There are three model variants, depending on the input audio sampling rate; a small helper for picking the matching variant is sketched after the table.

| Model | Input audio sampling rate [kHz] |
| ------------------ | ----------------- |
| [`hance-ai/descript-audio-codec-44khz`](https://huggingface.co/hance-ai/descript-audio-codec-44khz) | 44.1 |
| [`hance-ai/descript-audio-codec-24khz`](https://huggingface.co/hance-ai/descript-audio-codec-24khz) | 24 |
| [`hance-ai/descript-audio-codec-16khz`](https://huggingface.co/hance-ai/descript-audio-codec-16khz) | 16 |
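If you already know the sampling rate of your input audio, picking the matching variant can be scripted. A minimal sketch with a hypothetical helper that is not part of the API; it assumes `torchaudio` is available to read the file's metadata:

```python
import torchaudio

# Hypothetical helper: map a file's sampling rate to the closest checkpoint.
VARIANTS = {
    44100: 'hance-ai/descript-audio-codec-44khz',
    24000: 'hance-ai/descript-audio-codec-24khz',
    16000: 'hance-ai/descript-audio-codec-16khz',
}

def pick_variant(audio_filename: str) -> str:
    sample_rate = torchaudio.info(audio_filename).sample_rate
    closest = min(VARIANTS, key=lambda rate: abs(rate - sample_rate))
    return VARIANTS[closest]
```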
# Usage
### Load
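A minimal loading sketch is shown below. The exact loader call is an assumption (`AutoModel.from_pretrained` with `trust_remote_code=True` is the usual pattern for model cards that ship custom code); the `model.to(device)` step comes from the original snippet.

```python
import torch
from transformers import AutoModel

# Assumption: the codec is exposed through AutoModel with custom code.
model = AutoModel.from_pretrained('hance-ai/descript-audio-codec-44khz',
                                  trust_remote_code=True)

# Run on GPU if available, otherwise fall back to CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
model.eval()
```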
### Encode
```python
audio_filename = 'path/example_audio.wav'
zq, s = model.encode(audio_filename)
```
`zq` is the discrete embeddings with shape (1, num_RVQ_codebooks, token_length), and `s` is the corresponding token sequence with shape (1, num_RVQ_codebooks, token_length).
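Decoding and token round-tripping follow the same pattern; the sketch below is indicative rather than definitive. `model.decode(zq)` and `model.save_tensor(...)` are assumptions about the API, while `model.load_tensor('tokens.pt')` is taken from the original.

```python
# Assumption: decode accepts the discrete embeddings returned by encode
# (it may also, or instead, accept the token sequence `s`).
waveform = model.decode(zq)

# Assumption: save_tensor is the counterpart of load_tensor.
model.save_tensor(s, 'tokens.pt')
loaded_s = model.load_tensor('tokens.pt')
```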
# Runtime
To give a brief idea, the following table reports the average runtime, in seconds, for encoding and decoding 10 s of audio on CPU and GPU. The CPU used is an Intel Core i9-11900K and the GPU is an RTX 3060.

| Task | CPU | GPU |
|-----------------|---------|---------|
| Encoding | 6.71 | 0.19 |
| Decoding | 15.4 | 0.31 |
Decoding takes longer simply because the decoder is larger than the encoder.
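To reproduce such measurements in your own environment, here is a minimal timing sketch (the warm-up and averaging choices are ours, and `model.decode(zq)` is assumed as above):

```python
import time
import torch

def timed(fn, *args, repeats=5):
    fn(*args)  # warm-up so one-off initialization (e.g. CUDA kernels) is not counted
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

zq, s = model.encode(audio_filename)
print(f"encode: {timed(model.encode, audio_filename):.2f} s")
print(f"decode: {timed(model.decode, zq):.2f} s")  # assumes decode takes `zq`
```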
# Technical Discussion
### Chunk-based Processing
|