
An int8-quantized, CTranslate2-compatible version of vasista22/whisper-hindi-large-v2. This compresses the original 5.7 GB model down to 1.6 GB :).

The model was created using:

```bash
ct2-transformers-converter --model /path/to/vasista22/whisper-hindi-large-v2 \
    --output_dir whisper-hindi-large-v2-ct2-int8 \
    --copy_files tokenizer_config.json preprocessor_config.json added_tokens.json special_tokens_map.json \
    --quantization int8
```
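If you want to reproduce the conversion, one way to obtain a local copy of the original checkpoint (the `/path/to/...` placeholder above) is via `huggingface_hub`. This is just a minimal sketch, not part of the original conversion steps:

```python
# Sketch: download the full-precision checkpoint so its path can be passed
# to ct2-transformers-converter via --model. Any writable directory works.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="vasista22/whisper-hindi-large-v2")
print(local_dir)  # use this path as the --model argument of the converter
```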

For mono-speaker audio, use either of the following (a faster-whisper usage sketch follows the list):

  1. ctranslate2
  2. faster-whisper
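For example, with faster-whisper (a minimal sketch; the local model directory, audio filename, and decoding options are placeholders, not part of this repository):

```python
# Sketch: transcribe a single-speaker Hindi recording with faster-whisper.
# "whisper-hindi-large-v2-ct2-int8" is assumed to be a local copy of this repo.
from faster_whisper import WhisperModel

model = WhisperModel(
    "whisper-hindi-large-v2-ct2-int8",  # path to the converted model directory
    device="cpu",                       # or "cuda" if a GPU is available
    compute_type="int8",                # matches the int8 quantization of this model
)

segments, info = model.transcribe("audio.wav", language="hi", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```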

For multi-speaker audio with English diarization, use whisperX.
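A rough whisperX sketch is shown below. It is based on the whisperX README; the model path, audio file, Hugging Face token, and exact API are assumptions and may differ between whisperX versions, so check the project documentation:

```python
# Sketch: transcription plus speaker diarization with whisperX.
# Passing the converted model directory is assumed to work because whisperX
# loads CTranslate2 models through faster-whisper; verify against its docs.
import whisperx

device = "cuda"
audio = whisperx.load_audio("audio.wav")

model = whisperx.load_model("whisper-hindi-large-v2-ct2-int8", device, compute_type="int8")
result = model.transcribe(audio, batch_size=16)

# The full whisperX pipeline also includes a word-alignment step (see its README).
# Diarization requires a Hugging Face token with access to the pyannote models.
diarize_model = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)

for segment in result["segments"]:
    print(segment.get("speaker", "UNKNOWN"), segment["text"])
```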

For multi-speaker audio with non-English diarization, use whisper-diarization.
