
# mlx-community/ChatMusician-hf-4bit-mlx

This model was converted to MLX format from m-a-p/ChatMusician. Refer to the original model card for more details on the model.

## Use with mlx

```shell
pip install mlx-lm
```

```python
import re
from string import Template

from mlx_lm.utils import generate, load

# ChatMusician uses a "Human: ... </s> Assistant: " chat format.
prompt_template = Template("Human: ${inst} </s> Assistant: ")

model, tokenizer = load("mlx-community/ChatMusician-hf-4bit-mlx")

instruction = """Develop a musical piece using the given chord progression.
'Dm', 'C', 'Dm', 'Dm', 'C', 'Dm', 'C', 'Dm'
"""
prompt = prompt_template.safe_substitute({"inst": instruction})

response = generate(
    model=model,
    tokenizer=tokenizer,
    prompt=prompt,
    temp=0.6,
    top_p=0.9,
    max_tokens=1000,
    repetition_penalty=1.1,
)
```
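The template substitution can be checked on its own before loading the model; the instruction below is a hypothetical placeholder:

```python
from string import Template

prompt_template = Template("Human: ${inst} </s> Assistant: ")
# Hypothetical instruction, just to show the resulting prompt shape.
prompt = prompt_template.safe_substitute({"inst": "Write a short melody in D minor."})
print(prompt)
# Human: Write a short melody in D minor. </s> Assistant: 
```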

Convert the generated ABC notation to audio with symusic:

```shell
pip install symusic
```

```python
import re
import wave

from symusic import Score, Synthesizer

# Extract the first ABC tune (starting at an "X:<n>" reference-number line)
# from the model's response.
abc_pattern = r"(X:\d+\n(?:[^\n]*\n)+)"
abc_notation = re.findall(abc_pattern, response + "\n")[0]

s = Score.from_abc(abc_notation)
audio = Synthesizer().render(s, stereo=True)
```
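The `abc_pattern` regex captures everything from the first `X:<n>` line through the run of lines that follows it. A standalone check against a small hypothetical response string:

```python
import re

abc_pattern = r"(X:\d+\n(?:[^\n]*\n)+)"
# Hypothetical model output: prose followed by an ABC tune.
sample = (
    "Sure! Here is a piece for you:\n"
    "X:1\n"
    "T:Example Tune\n"
    "K:Dm\n"
    "DFA dAF|DFA dAF|\n"
)
abc = re.findall(abc_pattern, sample)[0]
print(abc.splitlines()[0])
# X:1
```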

Write the rendered audio to a 16-bit stereo WAV file:

```python
sample_rate = 44100
# Scale float samples to signed 16-bit PCM.
audio = (audio * 32767).astype("int16")

with wave.open("cm_music_piece.wav", "w") as wf:
    wf.setnchannels(2)
    wf.setsampwidth(2)
    wf.setframerate(sample_rate)
    wf.writeframes(audio.tobytes())
```
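The `* 32767` step maps float samples (nominally in [-1, 1]) onto the signed 16-bit PCM range. A minimal NumPy sketch on fake stereo frames, clipping first so out-of-range samples cannot wrap around when cast:

```python
import numpy as np

# Fake stereo frames standing in for the synthesizer output.
audio = np.array([[0.5, -0.5], [1.0, -1.0]], dtype=np.float32)
pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
print(pcm.tolist())
# [[16383, -16383], [32767, -32767]]
```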
Model size: 1.16B params (Safetensors; FP16 and U32 tensors)
The Inference API (serverless) does not yet support mlx models for this pipeline type.