---
base_model: facebook/musicgen-small
library_name: transformers.js
license: cc-by-nc-4.0
---

https://huggingface.co/facebook/musicgen-small with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

NOTE: MusicGen support is experimental and requires you to install Transformers.js v3 from source.

If you haven't already, you can install the Transformers.js JavaScript library from GitHub using:

```bash
npm install xenova/transformers.js#v3
```

**Example:** Generate music with `Xenova/musicgen-small`.

```js
import { AutoTokenizer, MusicgenForConditionalGeneration } from '@xenova/transformers';

// Load tokenizer and model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/musicgen-small');
const model = await MusicgenForConditionalGeneration.from_pretrained('Xenova/musicgen-small', {
  dtype: {
    text_encoder: 'q8',
    decoder_model_merged: 'q8',
    encodec_decode: 'fp32',
  },
});

// Prepare text input
const prompt = 'a light and cheerful EDM track, with syncopated drums, airy pads, and strong emotions bpm: 130';
const inputs = tokenizer(prompt);

// Generate audio
const audio_values = await model.generate({
  ...inputs,
  max_new_tokens: 500,
  do_sample: true,
  guidance_scale: 3,
});

// (Optional) Write the output to a WAV file
import wavefile from 'wavefile';
import fs from 'fs';

const wav = new wavefile.WaveFile();
wav.fromScratch(1, model.config.audio_encoder.sampling_rate, '32f', audio_values.data);
fs.writeFileSync('musicgen.wav', wav.toBuffer());
```
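As a quick sanity check on the output, the model's audio encoder produces audio at a fixed sampling rate (exposed as `model.config.audio_encoder.sampling_rate`; 32 kHz for musicgen-small), so the clip length follows directly from the number of generated samples. A minimal sketch, where the `durationSeconds` helper and the 32,000 Hz constant are our own illustration rather than part of the Transformers.js API:

```javascript
// Sampling rate of musicgen-small's audio encoder (assumed: 32,000 Hz)
const samplingRate = 32000;

// Hypothetical helper: clip length in seconds, given the number of
// audio samples (e.g. audio_values.data.length from the example above)
function durationSeconds(numSamples, rate) {
  return numSamples / rate;
}

console.log(durationSeconds(320000, samplingRate)); // 320,000 samples at 32 kHz -> 10 seconds
```

This is useful for estimating how large a `max_new_tokens` budget you need for a target clip length.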

We also released an online demo, which you can try yourself: [musicgen-web](https://huggingface.co/spaces/Xenova/musicgen-web)


Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
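As a sketch of that conversion step, 🤗 Optimum provides an `optimum-cli export onnx` command; the flags and output directory below are illustrative, and ONNX export support varies by architecture:

```shell
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export a Transformers checkpoint to ONNX; the exported *.onnx files
# land in the given output directory. To match this repo's layout,
# place them in an `onnx/` subfolder of your model repository.
optimum-cli export onnx --model facebook/musicgen-small musicgen_onnx/
```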