Instructions to use UsefulSensors/moonshine-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use UsefulSensors/moonshine-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="UsefulSensors/moonshine-base")
# (a runnable transcription example follows the Notebooks list below)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("UsefulSensors/moonshine-base")
model = AutoModelForSpeechSeq2Seq.from_pretrained("UsefulSensors/moonshine-base")
```

- Notebooks
- Google Colab
- Kaggle
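As referenced above, a minimal transcription call with the pipeline, assuming a local `audio.wav` (the pipeline decodes the file and resamples it to the model's 16 kHz input rate via ffmpeg):

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="UsefulSensors/moonshine-base")

# "audio.wav" is a placeholder path; any ffmpeg-readable file works.
result = pipe("audio.wav")
print(result["text"])
```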
GGUF + pure-C++ runtime in CrispASR — Moonshine base
Discussion #6 · opened by cstr
We've added Moonshine-base to CrispASR — same moonshine backend as tiny, just dispatched on GGUF metadata (moonshine-impl.h shared between sizes).
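For anyone curious what that dispatch can key on, the `gguf` Python package from the llama.cpp project will dump a file's metadata; this is generic inspection only, and which specific fields CrispASR actually reads is not shown here:

```python
# pip install gguf  (reader maintained in the llama.cpp repo)
from gguf import GGUFReader

reader = GGUFReader("moonshine-base-q4_k.gguf")

# Print every metadata key in the file; size-specific values such as
# layer count and embedding width are the kind of thing a backend can
# branch on instead of hardcoding per-model builds.
for name in reader.fields:
    print(name)
```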
Runtime details:
- Conv stem + 8-layer transformer encoder + 8-layer decoder (416-dim, partial RoPE, SiLU).
- KV-cached autoregressive decode with flash attention (see the sketch after this list).
- Companion-file mechanism for `tokenizer.bin` with auto-download.
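To make the decode bullet concrete, here is a minimal sketch of the same encode-once, decode-with-KV-cache pattern, written against the Hugging Face model rather than CrispASR's C++ internals. The silence input and the 64-token cap are illustrative, and in practice `model.generate` handles all of this for you:

```python
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("UsefulSensors/moonshine-base")
model = AutoModelForSpeechSeq2Seq.from_pretrained("UsefulSensors/moonshine-base").eval()

# Placeholder input: one second of 16 kHz silence; swap in real samples.
audio = torch.zeros(16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # The encoder (conv stem + transformer stack) runs exactly once per clip.
    enc = model.get_encoder()(inputs.input_values)

    # Greedy autoregressive decode: past_key_values caches each layer's
    # keys/values, so every step feeds only the newest token to the decoder.
    token = torch.tensor([[model.generation_config.decoder_start_token_id]])
    past, out_ids = None, []
    for _ in range(64):  # hard cap, for the sketch only
        out = model(
            decoder_input_ids=token,
            encoder_outputs=enc,
            past_key_values=past,
            use_cache=True,
        )
        past = out.past_key_values
        token = out.logits[:, -1:].argmax(-1)
        if token.item() == model.generation_config.eos_token_id:
            break
        out_ids.append(token.item())

print(processor.tokenizer.decode(out_ids))
```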
Pre-quantised GGUFs (MIT): cstr/moonshine-base-GGUF — plus the per-language variants we converted: cstr/moonshine-base-{ja,ko,zh,ar,vi,uk}-GGUF.
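To pull one of those quantised files programmatically, a sketch with `huggingface_hub` (the filename is copied from the command below; check the repo's file listing for other quant levels):

```python
from huggingface_hub import hf_hub_download

# Filename taken from the example command below; other quantisation
# levels live in the same repo under different names.
path = hf_hub_download(
    repo_id="cstr/moonshine-base-GGUF",
    filename="moonshine-base-q4_k.gguf",
)
print(path)  # local cache path to pass to crispasr via -m
```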
```bash
./build/bin/crispasr --backend moonshine -m moonshine-base-q4_k.gguf -f audio.wav -osrt
```
Companion tiny size: cstr/moonshine-tiny-GGUF. Streaming variants: cstr/moonshine-streaming-{tiny,small,medium}-GGUF (separate moonshine-streaming backend).