This is a d-Matrix functional reference of the whisper-large-v3-turbo model. The reference provides the following functional configurations:

| Configuration | Explanation |
| --- | --- |
| BASELINE | a reference functionally equivalent to the original model |
| BASIC | all linear-algebraic operands quantized to MXINT8-64, and all other operations transformed to approximated kernel simulations |

Usage

First, install the d-Matrix Dmx_Compressor package:

pip install dmx_compressor

The following example shows how to load the model, apply the d-Matrix transformation, and run it on a short audio sample.

import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
from dmx.compressor.modeling import DmxModel


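# Run on GPU with fp16 when CUDA is available; otherwise fall back to CPU with fp32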
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "d-matrix/whisper-large-v3-turbo"

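# Load the Whisper checkpoint from the d-Matrix repository on the Hub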
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

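# Assemble a standard Hugging Face automatic-speech-recognition pipeline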
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=torch_dtype,
    device=device,
)

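# Fetch a LibriSpeech sample and truncate it so the example runs quickly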
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
shorter_audio = sample["array"][:1000]

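# Wrap the model with DmxModel to apply the d-Matrix functional simulation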
pipe.model = DmxModel.from_torch(pipe.model)

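# Transcribe the clip and print the text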
result = pipe(shorter_audio)
print(result["text"])
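
To turn the transcription into a quantitative evaluation, one common option is to score it against a reference transcript with word error rate. The sketch below is only an illustration: it assumes the Hugging Face evaluate package (with jiwer installed) and uses a placeholder reference string, neither of which is part of this model card.

import evaluate  # requires `pip install evaluate jiwer`

# Hypothetical ground-truth transcript; replace with the real reference text
# for the audio you transcribed.
references = ["mister quilter is the apostle of the middle classes"]
predictions = [result["text"].strip().lower()]

wer = evaluate.load("wer")
print("WER:", wer.compute(predictions=predictions, references=references))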