jvh committed
Commit
5d55e6d
1 Parent(s): f205e54

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -117,8 +117,10 @@ From my early tests:
 - Much less GPU memory required
 - It seems that performance is on par with the original
 - It seems that this combination is faster than just using the CTranslate2 int8 quantization.
+
 Quantization method TBA.
 To use this model, use the faster_whisper module as described in [the original faster-whisper model](https://huggingface.co/Systran/faster-whisper-large-v3).
+Or use [WhisperX](https://github.com/m-bain/whisperX); this is what I used for my small tests (do not forget to set the compute type to int8).
 
 Any benchmark results are appreciated. I probably do not have time to do it myself.
 
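
For reference, a minimal sketch of loading the checkpoint through the faster-whisper Python API with int8 computation, as the updated README suggests. The repository id and audio path below are placeholders, and it assumes the checkpoint loads the same way as the original faster-whisper-large-v3 model.

```python
# Minimal sketch: load the quantized checkpoint with faster-whisper and
# transcribe a file using int8 computation.
# The repo id and audio path are placeholders, not the actual model name.
from faster_whisper import WhisperModel

model = WhisperModel(
    "your-username/your-quantized-whisper-repo",  # placeholder repo id or local path
    device="cuda",        # or "cpu"
    compute_type="int8",  # run the CTranslate2 weights in int8
)

segments, info = model.transcribe("audio.wav")  # placeholder audio file
print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```

WhisperX, which wraps the same CTranslate2 backend, exposes a similar int8 option when loading its model, which appears to be what the author used for the small tests mentioned above.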