inference_code_snippet_added
README.md CHANGED
@@ -55,6 +55,21 @@ In order to infer a single audio file using this model, the following code snippet can be used:
 >>> print('Transcription: ', transcribe(audio)["text"])
 ```
 
+For faster inference of Whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
+
+```python
+>>> import jax.numpy as jnp
+>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
+
+>>> # path to the audio file to be transcribed
+>>> audio = "/path/to/audio.format"
+
+>>> transcribe = FlaxWhisperPipline("vasista22/whisper-kannada-medium", batch_size=16)
+>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="kn", task="transcribe")
+
+>>> print('Transcription: ', transcribe(audio)["text"])
+```
+
 ## Training and evaluation data
 
 Training Data: