Update README.md
README.md CHANGED
@@ -46,9 +46,10 @@ We measure the inference speed of different kotoba-whisper-v2.0 implementations
 |audio 4 | 5.6 | 35 | 126 | 69 |
 
 Scripts to re-run the experiment can be found below:
-* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-
-* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-
-* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-
+* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/blob/main/benchmark.sh)
+* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-faster/blob/main/benchmark.sh)
+* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0/blob/main/benchmark.sh)
+
 Also, currently whisper.cpp and faster-whisper support the [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form),
 and only the Hugging Face pipeline supports the [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically
 found better than the sequential long-form decoding.
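For context on the chunked long-form decoding the hunk refers to: in the Hugging Face pipeline it is enabled by passing `chunk_length_s` (and optionally `batch_size`) to `pipeline(...)`. A minimal sketch — the chunk length, batch size, and audio path below are illustrative assumptions, not the settings used in the benchmark:

```python
# Sketch of chunked long-form decoding with the Hugging Face pipeline.
# chunk_length_s and batch_size are standard pipeline arguments; the
# specific values here are illustrative, not the benchmarked settings.
from transformers import pipeline


def transcribe_chunked(audio_path: str) -> str:
    """Transcribe a long audio file with chunked long-form decoding."""
    asr = pipeline(
        "automatic-speech-recognition",
        model="kotoba-tech/kotoba-whisper-v2.0",
        chunk_length_s=15,  # split audio into ~15 s chunks decoded independently
        batch_size=16,      # decode several chunks per forward pass
    )
    return asr(audio_path)["text"]


# Usage (hypothetical file name):
#   text = transcribe_chunked("long_interview.mp3")
```

By contrast, sequential long-form decoding (the mode whisper.cpp and faster-whisper use) processes the audio in order, conditioning each window on the previous one, so it cannot batch chunks in parallel the same way.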