---
title: MusicGen+ V1.2.7 (HuggingFace Version)
emoji: 🎼
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: true
---

# Audiocraft Plus


Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model.

## MusicGen+: An All-in-one WebUI for MusicGen


MusicGen+ is an extension built on top of the original MusicGen.

Audiocraft provides the code and models for MusicGen, a simple and controllable model for music generation. MusicGen is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. Check out our sample page or test the available demo!

Open In Colab | Open in Hugging Face
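The delay pattern is easiest to see on a toy grid of tokens. The sketch below is illustrative only (hypothetical code, not Audiocraft's internal implementation): each of the 4 codebooks is shifted right by one extra step, so a single auto-regressive pass can emit a token for every codebook at every step.

```python
import torch

codebooks, steps = 4, 8
# Toy token grid: row k holds the 50 Hz token stream of codebook k.
tokens = torch.arange(codebooks * steps).reshape(codebooks, steps)

# Shift codebook k right by k steps; -1 marks padding positions.
delayed = torch.full((codebooks, steps + codebooks - 1), -1)
for k in range(codebooks):
    delayed[k, k:k + steps] = tokens[k]

print(delayed)  # each row lags the previous one by a single step
```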

We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data.

## Installation

Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following:

```shell
# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
pip install 'torch>=2.0'
# Then proceed to one of the following
pip install -U audiocraft  # stable release
pip install -U git+https://git@github.com/GrandaddyShmax/audiocraft_plus#egg=audiocraft
pip install -e .  # or if you cloned the repo locally
```
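Once installed, a quick sanity check (our suggestion, not part of Audiocraft itself) confirms that PyTorch sees a GPU and reports how much memory it has:

```python
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"VRAM: {vram_gb:.1f} GB")  # ~16 GB recommended for the medium model
```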

## Usage

We offer a number of ways to interact with MusicGen+:

1. MusicGen+ is available on the GrandaddyShmax/MusicGen_Plus HuggingFace Space.
2. You can run MusicGen+ on Colab: colab notebook.
3. You can run the Gradio demo locally with `python app.py`.
4. Check out the @camenduru Colab page, which is regularly updated with contributions from @camenduru and the community.
5. Finally, MusicGen is available in 🤗 Transformers from v4.31.0 onwards; see the 🤗 Transformers Usage section below.

## API

We provide a simple API and 4 pre-trained models. The pre-trained models are:

- `small`: 300M model, text to music only
- `medium`: 1.5B model, text to music only
- `melody`: 1.5B model, text to music and text+melody to music
- `large`: 3.3B model, text to music only

We observe the best trade-off between quality and compute with the medium or melody model. In order to use MusicGen locally you must have a GPU. We recommend 16GB of memory, but smaller GPUs will be able to generate short sequences, or longer sequences with the small model.
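If you are unsure which checkpoint fits your hardware, one simple heuristic (our suggestion, not an official API) is to drop to the `small` model, or shorten the requested duration, when VRAM is tight:

```python
import torch
from audiocraft.models import MusicGen

# Hypothetical rule of thumb: use the medium model only on GPUs with >= ~16 GB of VRAM.
vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9 if torch.cuda.is_available() else 0
model = MusicGen.get_pretrained('medium' if vram_gb >= 16 else 'small')
model.set_generation_params(duration=30 if vram_gb >= 16 else 10)
```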

Note: Please make sure to have ffmpeg installed when using a newer version of torchaudio. You can install it with:

```shell
apt-get install ffmpeg
```

Below is a quick example of using the API:

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('melody')
model.set_generation_params(duration=8)  # generate 8 seconds.
wav = model.generate_unconditional(4)    # generates 4 unconditional audio samples
descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
wav = model.generate(descriptions)  # generates 3 samples.

melody, sr = torchaudio.load('./assets/bach.mp3')
# generates using the melody from the given audio and the provided descriptions.
wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)

for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
```
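Beyond text and melody conditioning, the same model object can also continue an existing recording via `generate_continuation` from the audiocraft API. A minimal sketch, reusing the bach.mp3 asset from above:

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('melody')
model.set_generation_params(duration=8)

# Use the last 2 seconds of the clip as an audio prompt to continue from.
prompt, sr = torchaudio.load('./assets/bach.mp3')
prompt = prompt[..., -2 * sr:]
wav = model.generate_continuation(prompt, sr, descriptions=['happy rock'])
audio_write('continuation_0', wav[0].cpu(), model.sample_rate, strategy="loudness")
```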

## 🤗 Transformers Usage

MusicGen is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring minimal dependencies and no additional packages. Steps to get started:

1. First install the 🤗 Transformers library from main:

```shell
pip install git+https://github.com/huggingface/transformers.git
```

2. Run the following Python code to generate text-conditional audio samples:
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="pt",
)

audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```

Or save them as a .wav file using a third-party library, e.g. scipy:

```python
import scipy

sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
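Transformers can also generate without a text prompt at all, using `get_unconditional_inputs` from the Transformers MusicGen API; a short sketch:

```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Null inputs make the model sample freely instead of following a prompt.
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```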

For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the MusicGen docs or the hands-on Google Colab.

## Model Card

See the model card page.

## FAQ

#### Will the training code be released?

Yes. We will soon release the training code for MusicGen and EnCodec.

#### I need help on Windows

@FurkanGozukara made a complete tutorial for Audiocraft/MusicGen on Windows.

#### I need help for running the demo on Colab

Check @camenduru's tutorial on YouTube.

## Citation

```bibtex
@article{copet2023simple,
  title={Simple and Controllable Music Generation},
  author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
  year={2023},
  journal={arXiv preprint arXiv:2306.05284},
}
```

## License

- The code in this repository is released under the MIT license as found in the LICENSE file.
- The weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.