---
language: ja
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper kotoba-whisper-bilingual-v1.0 model for CTranslate2

This repository contains the conversion of [kotoba-tech/kotoba-whisper-bilingual-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0)
to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).

## Example
Install the library and download a sample audio file.
```shell
pip install faster-whisper
wget https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/resolve/main/sample_ja_speech.wav
```
Run inference with kotoba-whisper-bilingual-v1.0-faster.

```python
from faster_whisper import WhisperModel

model = WhisperModel("kotoba-tech/kotoba-whisper-bilingual-v1.0-faster")

segments, info = model.transcribe("sample_ja_speech.wav", language="ja", chunk_length=15, condition_on_previous_text=False)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
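
Because this is the bilingual checkpoint, speech translation should also be possible. Below is a minimal sketch of Japanese-to-English translation, assuming the model follows the standard Whisper convention in which `task="translate"` produces English text; the parameters here are illustrative rather than taken from this repository.

```python
from faster_whisper import WhisperModel

model = WhisperModel("kotoba-tech/kotoba-whisper-bilingual-v1.0-faster")

# task="translate" asks the model to emit English text for Japanese speech
# (standard Whisper behavior; assumed to carry over to this bilingual checkpoint).
segments, info = model.transcribe(
    "sample_ja_speech.wav",
    task="translate",
    chunk_length=15,
    condition_on_previous_text=False,
)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```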

### Benchmark
We measure the inference speed of different kotoba-whisper-v2.0 implementations on four Japanese speech audio files, using a MacBook Pro with the following spec:
- Apple M2 Pro
- 32 GB memory
- 14-inch, 2023
- macOS Sonoma 14.4.1 (23E224)

| audio file | audio duration (min) | [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml) (sec) | [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-faster) (sec) | [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0) (sec) |
|:----------|-----:|----:|-----:|----:|
| audio 1   | 50.3 | 581 | 2601 | 807 |
| audio 2   | 5.6  | 41  | 73   | 61  |
| audio 3   | 4.9  | 30  | 141  | 54  |
| audio 4   | 5.6  | 35  | 126  | 69  |

Scripts to re-run the experiment can be found below:
* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml/blob/main/benchmark.sh)
* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-faster/blob/main/benchmark.sh)
* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0/blob/main/benchmark.sh)

Note that whisper.cpp and faster-whisper currently support only [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form), while the Hugging Face pipeline supports [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically found to perform better than sequential long-form decoding.
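
As a point of comparison, here is a minimal sketch of chunked long-form decoding with the Hugging Face pipeline; it assumes the `transformers` library is installed, and `chunk_length_s=15` is an illustrative value rather than a recommendation from this repository.

```python
from transformers import pipeline

# chunk_length_s enables chunked long-form decoding: the audio is split into
# fixed-length chunks that are transcribed independently and then merged.
pipe = pipeline(
    "automatic-speech-recognition",
    model="kotoba-tech/kotoba-whisper-v2.0",
    chunk_length_s=15,
)
result = pipe("sample_ja_speech.wav")
print(result["text"])
```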

## Conversion details

The original model was converted with the following command:

```shell
ct2-transformers-converter --model kotoba-tech/kotoba-whisper-bilingual-v1.0 --output_dir kotoba-whisper-bilingual-v1.0-faster --quantization float16
```

Note that the model weights are saved in FP16. The compute type can be changed when the model is loaded, using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
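
For example, here is a minimal sketch of loading the model on CPU with INT8 computation; the repository id is assumed to match this conversion, and `compute_type="int8"` is one of the CTranslate2 quantization options linked above.

```python
from faster_whisper import WhisperModel

# Load the FP16 weights but run inference in INT8 on CPU
# (CTranslate2 converts the weights on the fly at load time).
model = WhisperModel(
    "kotoba-tech/kotoba-whisper-bilingual-v1.0-faster",
    device="cpu",
    compute_type="int8",
)
```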

## More information

For more information about kotoba-whisper-bilingual-v1.0, refer to the original [model card](https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0).