# Woxpas mirror

This repo is a verbatim mirror of Systran/faster-whisper-small at commit 536b066274, pulled on 2026-05-06. It is maintained at woxpas-ai/whisper-small-ct2 by Woxpas for redistribution stability: if the upstream is renamed, removed, or modified, downstream Woxpas runtimes can keep loading from this URL.

Original work by SYSTRAN (a CTranslate2 conversion of openai/whisper-small), licensed under MIT. Attribution and license terms are preserved unchanged. Pin to the commit SHA above to get the exact bytes mirrored on 2026-05-06; pushing future changes is reserved for org admins of woxpas-ai.

What Woxpas uses this for: voice-note transcription via faster-whisper.
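As a sketch of what pinning to the commit SHA means in practice: the Hub serves any file at an exact commit through its standard `resolve` endpoint. The helper below is hypothetical (not part of this repo) and just builds such a URL for the mirrored SHA:

```python
def pinned_file_url(repo_id: str, revision: str, filename: str) -> str:
    """Build the Hugging Face Hub URL that serves `filename` at an exact commit."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Commit SHA from the mirror notice above.
MIRROR_SHA = "536b066274"
url = pinned_file_url("woxpas-ai/whisper-small-ct2", MIRROR_SHA, "model.bin")
# url points at the exact bytes mirrored on 2026-05-06, unaffected by later pushes.
```

Loading by branch name (e.g. `main`) instead would silently pick up any future push, which defeats the purpose of the mirror.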
Note on model.bin: despite the name, this file is in the CTranslate2 binary weights format (a custom packed format), NOT a Python pickle. CTranslate2 weights are not executable.
# Whisper small model for CTranslate2
This repository contains the conversion of openai/whisper-small to the CTranslate2 model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.
## Example
```python
from faster_whisper import WhisperModel

model = WhisperModel("small")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-small --output_dir faster-whisper-small \
    --copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the `compute_type` option in CTranslate2.
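For example, a device-dependent choice of `compute_type` could look like the sketch below. The mapping is an illustrative assumption, not an upstream recommendation, and the `load_small` helper is hypothetical:

```python
def pick_compute_type(device: str) -> str:
    """Map a target device to a CTranslate2 compute_type (illustrative defaults)."""
    # FP16 kernels generally need a GPU; on CPU, int8 roughly halves memory
    # again relative to the stored FP16 weights, while float32 keeps full precision.
    return "float16" if device == "cuda" else "int8"

def load_small(device: str = "cpu"):
    # Lazy import so pick_compute_type works without faster-whisper installed.
    from faster_whisper import WhisperModel
    # Downloads the model on first call; CTranslate2 converts the stored FP16
    # weights to the requested compute_type at load time.
    return WhisperModel("small", device=device, compute_type=pick_compute_type(device))
```

Passing an unsupported type is not fatal: CTranslate2 falls back to the nearest type the hardware supports.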
## More information
For more information about the original model, see its model card.