reach-vb (HF staff) committed
Commit 5a2e1e9 • 1 Parent(s): 324dd73

Update README.md

Files changed (1): README.md (+21 −8)

README.md CHANGED
@@ -46,17 +46,30 @@ Try out MusicGen yourself!
 
 You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
 
-1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
+1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
 
 ```
-pip install git+https://github.com/huggingface/transformers.git
+pip install --upgrade pip
+pip install --upgrade transformers scipy
 ```
 
-2. Run the following Python code to generate text-conditional audio samples:
+2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can run the MusicGen model via the TTA pipeline in just a few lines of code!
 
-```py
-from transformers import AutoProcessor, MusicgenForConditionalGeneration
-
+```python
+from transformers import pipeline
+import scipy
+
+synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
+
+music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
+
+scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
+```
+
+3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
+
+```python
+from transformers import AutoProcessor, MusicgenForConditionalGeneration
 
 processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
 model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
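The pipeline call above returns a dict; in the Transformers `text-to-audio` pipeline this carries the waveform under `audio` and the sample rate under `sampling_rate`. A minimal sketch of auditioning that output in a notebook instead of writing it to disk, assuming that dict layout:

```python
from IPython.display import Audio
from transformers import pipeline

synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})

# music["audio"] is a NumPy array; squeeze any leading channel axis for playback.
Audio(music["audio"].squeeze(), rate=music["sampling_rate"])
```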
 
@@ -70,9 +83,9 @@ inputs = processor(
 audio_values = model.generate(**inputs, max_new_tokens=256)
 ```
 
-3. Listen to the audio samples either in an ipynb notebook:
+4. Listen to the audio samples either in an ipynb notebook:
 
-```py
+```python
 from IPython.display import Audio
 
 sampling_rate = model.config.audio_encoder.sampling_rate
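In the fine-grained path, `max_new_tokens` sets the clip length: generation runs at the audio encoder's frame rate, roughly 50 token frames per second for the 32 kHz EnCodec codec MusicGen uses. A small sketch of the relationship, assuming the frame rate is exposed on the model config as in recent Transformers releases:

```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")

# Tokens per second of audio comes from the audio encoder's frame rate (~50 Hz),
# so 256 new tokens corresponds to roughly five seconds of music.
audio_length_in_s = 256 / model.config.audio_encoder.frame_rate
print(f"{audio_length_in_s:.1f} s")  # ≈ 5.1 s
```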
 
@@ -81,7 +94,7 @@ Audio(audio_values[0].numpy(), rate=sampling_rate)
 
 Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
 
-```py
+```python
 import scipy
 
 sampling_rate = model.config.audio_encoder.sampling_rate
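To finish the save step, the generated tensor needs to be indexed down to a 1-D array before writing; a minimal sketch, assuming `audio_values` from `generate` above has shape `(batch_size, num_channels, num_samples)`:

```python
import scipy

sampling_rate = model.config.audio_encoder.sampling_rate

# Take the first batch item and its single (mono) channel, then move to NumPy
# so scipy can write it out as a WAV file.
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```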