---
language:
- ko
base_model:
- imTak/whisper_large_v3_turbo_korean_Economy
---
# Whisper large-v3-turbo Korean Economy model for CTranslate2

This repository contains the conversion of [imTak/whisper_large_v3_turbo_korean_Economy](https://huggingface.co/imTak/whisper_large_v3_turbo_korean_Economy) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("imTak/whisper_large_v3_turbo_korean_Economy")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
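
Since this is a Korean fine-tune (see the `language` field in the metadata above), you may want to skip language auto-detection and pass the language code explicitly. A minimal sketch, continuing from the example above; `language` and `beam_size` are standard parameters of faster-whisper's `transcribe` method, and `beam_size=5` is its default:

```python
# Pin the decoding language to Korean instead of relying on auto-detection.
segments, info = model.transcribe("audio.mp3", language="ko", beam_size=5)
```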

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model imTak/whisper_large_v3_turbo_korean_Economy --output_dir faster-whisper_large_v3_turbo_korean_Economy \
    --copy_files tokenizer_config.json preprocessor_config.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).

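As a minimal sketch of that option (assuming faster-whisper as the loader, with the same model id as in the example above), the compute type can be chosen when the model is constructed:

```python
from faster_whisper import WhisperModel

# The weights are stored in FP16; CTranslate2 casts them to the requested
# compute type at load time, e.g. INT8 for CPU inference.
model = WhisperModel(
    "imTak/whisper_large_v3_turbo_korean_Economy",
    device="cpu",
    compute_type="int8",
)
```

On a CUDA device, `device="cuda"` with `compute_type="float16"` keeps the weights in their stored precision.
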
## More information

**For more information about the original model, see its [model card](https://huggingface.co/imTak/whisper_large_v3_turbo_korean_Economy).**