Commit 7235210
Parent(s): a05d7f5
retain audiocraft usage
README.md CHANGED
@@ -21,17 +21,27 @@ Four checkpoints are released:
## Example

-Try out MusicGen yourself!
+Try out MusicGen yourself!
+
+* Audiocraft Colab:
+
+<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
+  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+</a>
+
+* Hugging Face Colab:

<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

+* Hugging Face Demo:
+
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

-
+## 🤗 Transformers Usage

You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.

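The Transformers code snippet that the paragraph above refers to is elided from this diff; only the `scipy.io.wavfile.write(...)` context in the next hunk header hints at it. For reference, a minimal sketch of that usage with the documented 🤗 Transformers MusicGen API might look like the following; the `facebook/musicgen-small` checkpoint name, the prompt, and the `max_new_tokens` value are assumptions and are not shown in this commit:

```py
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Checkpoint name is an assumption; the commit does not show which checkpoint the card loads.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Tokenize a text prompt and generate audio tokens (256 new tokens is an illustrative value, roughly 5 seconds).
inputs = processor(text=["happy rock"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)

# Write the first generated waveform to disk at the model's sampling rate.
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```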
@@ -79,6 +89,38 @@ scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).

+## Audiocraft Usage
+
+You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
+
+1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft):
+```
+pip install git+https://github.com/facebookresearch/audiocraft.git
+```
+
+2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
+```
+apt-get install ffmpeg
+```
+
+3. Run the following Python code:
+
+```py
+from audiocraft.models import MusicGen
+from audiocraft.data.audio import audio_write
+
+model = MusicGen.get_pretrained("small")
+model.set_generation_params(duration=8)  # generate 8 seconds of audio
+
+descriptions = ["happy rock", "energetic EDM"]
+
+wav = model.generate(descriptions)  # generates 2 samples, one per description
+
+for idx, one_wav in enumerate(wav):
+    # Will save under {idx}.wav, with loudness normalization at -14 dB LUFS.
+    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
+```
+
## Model details

**Organization developing the model:** The FAIR team of Meta AI.