reach-vb (HF staff) committed
Commit d3bd7b0
Parent: ecb62cc

Update README.md (#9)


- Update README.md (fe86836c0b933dc24bd5a53e33483eb5696650cf)

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -71,7 +71,7 @@ synthesiser = pipeline("text-to-audio", "facebook/musicgen-medium")
 
 music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
 
-scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], music=audio["audio"])
+scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
 ```
 
 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
@@ -122,7 +122,7 @@ pip install git+https://github.com/facebookresearch/audiocraft.git
 
 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
 ```
-apt get install ffmpeg
+apt-get install ffmpeg
 ```
 
 3. Run the following Python code:
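Note on the first hunk: the old line passed a `music=` keyword that `scipy.io.wavfile.write` does not accept and read from an undefined `audio` variable; the new line uses the `data=` keyword and the `music["audio"]` array returned by the pipeline. A minimal runnable sketch of the corrected snippet, assuming the imports and pipeline setup shown in the hunk context:

```python
import scipy.io.wavfile  # import the submodule explicitly so scipy.io.wavfile is defined
from transformers import pipeline

# Text-to-audio pipeline, as set up in the lines surrounding this hunk
synthesiser = pipeline("text-to-audio", "facebook/musicgen-medium")

music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})

# Corrected call: scipy expects `data=`, and the waveform lives under music["audio"]
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```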
 
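The context line about running inference via the Transformers modelling code (processor + generate) is unchanged by this commit; as a hedged sketch of that route (the generation arguments below are illustrative values, not taken from the README):

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-medium")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-medium")

# Tokenise the text prompt(s)
inputs = processor(
    text=["lo-fi music with a soothing melody"],
    padding=True,
    return_tensors="pt",
)

# Sample a mono 32 kHz waveform; do_sample/max_new_tokens are illustrative
audio_values = model.generate(**inputs, do_sample=True, max_new_tokens=256)
sampling_rate = model.config.audio_encoder.sampling_rate
```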
 
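The second hunk only fixes the `apt-get` typo; the Python snippet referenced by step 3 sits outside the changed lines and is not shown in this diff. Purely as an illustration of the Audiocraft route the README describes (names follow Audiocraft's documented basic usage, not this commit):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-medium")
model.set_generation_params(duration=8)  # clip length in seconds, illustrative

# One waveform per text prompt, generated at model.sample_rate (32 kHz)
wav = model.generate(["lo-fi music with a soothing melody"])

for idx, one_wav in enumerate(wav):
    # Writes musicgen_out_0.wav with loudness normalisation
    audio_write(f"musicgen_out_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```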