Update README.md
README.md CHANGED
@@ -19,6 +19,9 @@ We further release a set of stereophonic capable models. Those were fine tuned f
 from the mono models. The training data is otherwise identical and capabilities and limitations are shared with the base models. The stereo models work by getting two streams of tokens from the EnCodec model and interleaving them using
 the delay pattern.
 
+Stereophonic sound, also known as stereo, is a technique used to reproduce sound with depth and direction.
+It uses two separate audio channels played through speakers or headphones, arranged so that it sounds like you're listening from different angles.
+
 MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
 It is a single-stage auto-regressive Transformer model trained over a 32 kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
 Unlike existing methods such as MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
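A note on the delay pattern mentioned above: MusicGen offsets codebook k by k timesteps so that all codebooks can be predicted in one autoregressive pass, and at 4 codebooks × 50 Hz a second of mono audio costs 4 × 50 = 200 tokens. Below is a toy sketch of that interleaving; `apply_delay_pattern` and the token ids are invented for illustration, and the exact stereo stacking and delay schedule used in AudioCraft may differ:

```python
import numpy as np

def apply_delay_pattern(codes: np.ndarray, pad_id: int = -1) -> np.ndarray:
    # codes: (num_codebooks, T) grid of EnCodec token ids.
    # Codebook k is shifted right by k steps, so at generation step t the
    # model emits codebook k's token for audio frame t - k.
    K, T = codes.shape
    out = np.full((K, T + K - 1), pad_id, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]
    return out

# Mono uses a (4, T) grid; for stereo the left and right EnCodec streams
# are stacked into a single (8, T) grid before applying the pattern.
left = np.arange(1, 13).reshape(4, 3)    # toy token ids for the left channel
right = left + 100                       # toy token ids for the right channel
print(apply_delay_pattern(np.concatenate([left, right], axis=0)))
```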
@@ -75,15 +78,15 @@ pip install --upgrade git+https://github.com/huggingface/transformers.git scipy
 2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
 
 ```python
-import scipy
 import torch
+import soundfile as sf
 from transformers import pipeline
 
-synthesiser = pipeline("text-to-audio", "facebook/musicgen-stereo-
+synthesiser = pipeline("text-to-audio", "facebook/musicgen-stereo-small", device="cuda:0", torch_dtype=torch.float16)
 
-music = synthesiser("lo-fi music with a soothing melody", forward_params={"
+music = synthesiser("lo-fi music with a soothing melody", forward_params={"max_new_tokens": 256})
 
-
+sf.write("musicgen_out.wav", music["audio"][0].T, music["sampling_rate"])
 ```
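The replacement of `scipy` with `soundfile` goes together with the transpose in the new `sf.write` line: the TTA pipeline returns a dict carrying an `audio` array and its `sampling_rate`, and for the stereo checkpoints the waveform comes back channels-first, while `soundfile` expects `(frames, channels)`; hence the `[0]` to drop the batch dimension and the `.T`. This reading of the array shape is inferred from the snippet, not stated in the diff.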
 
 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
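The step 3 snippet falls outside this diff's hunks. For reference, here is a minimal sketch of the processor + generate flow with the `transformers` MusicGen classes; the checkpoint name and `max_new_tokens` value are carried over from the pipeline example above as assumptions, not taken from this diff:

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Checkpoint name assumed to match the pipeline example above.
processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-small")

# Tokenize the prompt; padding allows batching prompts of different lengths.
inputs = processor(
    text=["lo-fi music with a soothing melody"],
    padding=True,
    return_tensors="pt",
)

# 256 new tokens is roughly 5 seconds of audio at the 50 Hz frame rate.
audio_values = model.generate(**inputs, max_new_tokens=256)

# The EnCodec sampling rate (32 kHz here) is stored on the model config.
sampling_rate = model.config.audio_encoder.sampling_rate
```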