micro-musicgen-acid

Curated and trained by Aaron Abebe.


WARNING: These models WILL sound bad to a lot of people. The goal is not to create pleasant-sounding music, but to spark creativity by using the weird sounds of neural codecs for music production and sampling!

Micro-Musicgen is a new family of super-small music generation models focused on experimental music and latent space exploration. These models have unique abilities and drawbacks that should enhance creativity when making music with them.

  • only unconditional generation: trained without text conditioning to reduce model size.
  • very fast generation: ~8 s for 10x 10 s samples.
  • permissive licensing: trained from scratch on royalty-free samples and handmade chops, which allows the models to be released under the MIT License.

This is the second entry in the series and is called micro-musicgen-acid. It's trained on various 303 sample packs as well as audio I recorded with my Behringer TD-3.

If you find this model interesting, please consider:

Samples

All samples are from a single run, without cherry picking.

Benchmarks

Usage

Install my audiocraft fork:

pip install -U git+https://github.com/aaronabebe/audiocraft#egg=audiocraft

Then, you should be able to load this model just like any other musicgen checkpoint here on the Hub:

import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained('pharoAIsanders420/micro-musicgen-acid')
model.set_generation_params(duration=10)
wav = model.generate_unconditional(10)

for idx, one_wav in enumerate(wav):
    # Saves under {idx}.wav, with loudness normalization at -14 dB LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
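Since these models are aimed at sampling workflows, one natural next step is slicing a generated clip into equal-length chops for a sampler. The sketch below is not part of the audiocraft API; it uses a sine tone as a stand-in for model output (and assumes the standard musicgen sample rate of 32 kHz) so it runs on its own. In practice you would load one of the saved wavs instead, e.g. with torchaudio.load('0.wav').

```python
import torch

# Stand-in for a generated clip: a 440 Hz sine tone. Replace with a
# loaded wav from the generation loop above for real use.
sample_rate = 32000              # assumed: musicgen models output 32 kHz audio
duration = 10                    # seconds, matching set_generation_params above
t = torch.arange(duration * sample_rate) / sample_rate
wav = torch.sin(2 * torch.pi * 440.0 * t).unsqueeze(0)   # shape (1, samples)

# Slice into equal-length chops for a sampler.
num_chops = 8
chop_len = wav.shape[-1] // num_chops
chops = [wav[:, i * chop_len:(i + 1) * chop_len] for i in range(num_chops)]
# Each chop here is 1.25 s long; write them out with torchaudio.save or
# audiocraft's audio_write, just like the full sample.
```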

Dataset

I sourced the datasets from these royalty-free sources:
