sanchit-gandhi (HF staff) committed
Commit 8bbf533
Parent: 59dbc7e

Update Transformers code example


cc @reach-vb

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -106,22 +106,23 @@ inputs = processor(
  audio_values = model.generate(**inputs, max_new_tokens=256)
  ```
 
- 3. Listen to the audio samples either in an ipynb notebook:
+ 4. Listen to the audio samples either in an ipynb notebook:
 
  ```python
  from IPython.display import Audio
 
  sampling_rate = model.config.audio_encoder.sampling_rate
- Audio(audio_values[0].numpy(), rate=sampling_rate)
+ Audio(audio_values[0].cpu().numpy(), rate=sampling_rate)
  ```
 
- Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
+ Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:
 
  ```python
- import scipy
+ import soundfile as sf
 
  sampling_rate = model.config.audio_encoder.sampling_rate
- scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
+ audio_values = audio_values.cpu().numpy()
+ sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
  ```
 
  For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
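
For reference, below is a minimal end-to-end sketch of the example as it reads after this commit. The `facebook/musicgen-small` checkpoint and the text prompt are illustrative assumptions, not taken from this diff; the API calls follow the 🤗 Transformers MusicGen usage shown in the README.

```python
import soundfile as sf
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Checkpoint name is an assumption for illustration; swap in the card's own checkpoint.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Tokenize a text prompt describing the music to generate.
inputs = processor(
    text=["80s pop track with bassy drums and synth"],
    padding=True,
    return_tensors="pt",
)

# Generate a waveform tensor of shape (batch_size, num_channels, num_samples).
audio_values = model.generate(**inputs, max_new_tokens=256)

# Move the waveform to host memory and write it as a .wav file.
# soundfile expects (num_samples, num_channels), hence the transpose.
sampling_rate = model.config.audio_encoder.sampling_rate
audio_array = audio_values[0].cpu().numpy()
sf.write("musicgen_out.wav", audio_array.T, sampling_rate)
```

The `.cpu()` call introduced by this commit matters when the model runs on a GPU: the generated waveform lives on the device and must be moved back to host memory before it can be converted to a NumPy array for playback or for `soundfile`.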