
MeditationMusicGen

This model is a fine-tuned version of Facebook's MusicGen (musicgen-small). Refer to https://huggingface.co/facebook/musicgen-small for more details.

🤗 Transformers Usage

You can run MeditationMusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.

1. First install the 🤗 Transformers library and scipy:
pip install --upgrade pip
pip install --upgrade transformers scipy
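Since the code below needs Transformers 4.31.0 or newer, you can optionally confirm the installed version before continuing (a quick check, not part of the original example):

python -c "import transformers; print(transformers.__version__)"
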
2. Then run the following Python code to generate a short audio clip:

from transformers import AutoProcessor, MusicgenForConditionalGeneration
import scipy.io.wavfile

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("bfh-genai/meditation-musicgen")

relaxing_description = 'Peaceful meditation background sound'

conditioned_text_input = processor(text=relaxing_description, padding=True, return_tensors="pt")

audio_values = model.generate(**conditioned_text_input,
                              do_sample=True,
                              guidance_scale=3,   # classifier-free guidance; values > 1 work, best results with 3
                              max_new_tokens=256  # 256 tokens correspond to roughly 5 seconds of audio
                              )
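# Note (not in the original example): max_new_tokens controls the clip length.
# MusicGen produces roughly 50 tokens per second of audio, so the 256 tokens
# above give about 5 seconds; a larger value such as 1500 yields roughly 30 seconds
# and takes correspondingly longer to sample.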

# MusicGen outputs mono audio at a 32 kHz sampling rate.
scipy.io.wavfile.write("my_audio.wav", rate=32_000, data=audio_values[0, 0].numpy())
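
As an optional alternative to writing a WAV file, you can preview the generated waveform directly in a Jupyter notebook. This is a minimal sketch assuming IPython is available; it is not part of the original example:

from IPython.display import Audio

# Listen to the generated clip inline; MusicGen produces mono audio at 32 kHz.
Audio(audio_values[0, 0].numpy(), rate=32_000)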

License: The code is released under MIT; the model weights are released under CC-BY-NC 4.0.
