JhonVanced committed
Commit 5716a48
Parent: 1820dd2

Upload with huggingface_hub

Files changed (6):
  1. README.md +141 -0
  2. config.json +240 -0
  3. model.bin +3 -0
  4. preprocessor_config.json +14 -0
  5. tokenizer.json +0 -0
  6. vocabulary.json +0 -0
README.md ADDED
@@ -0,0 +1,141 @@
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
- yue
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper large-v3 model for CTranslate2

This repository contains the conversion of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
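As a follow-up usage note, the `info` returned by `transcribe` also reports the detected language. The attribute names below come from the upstream faster-whisper documentation, not from this repository:

```python
# TranscriptionInfo exposes the language detection result.
print("Detected language '%s' with probability %f"
      % (info.language, info.language_probability))
```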
## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-large-v3 --output_dir faster-whisper-large-v3 \
    --copy_files tokenizer.json preprocessor_config.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
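For instance, a minimal sketch assuming faster-whisper's documented `device` and `compute_type` arguments: the FP16 weights stored here can be loaded and run as INT8 on CPU, trading some accuracy for memory and speed.

```python
from faster_whisper import WhisperModel

# CTranslate2 converts the stored FP16 weights to the requested
# compute_type at load time; no re-conversion of the model is needed.
model = WhisperModel("large-v3", device="cpu", compute_type="int8")
```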
## More information

**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
config.json ADDED
@@ -0,0 +1,240 @@
{
  "alignment_heads": [
    [7, 0], [10, 17], [12, 18], [13, 12], [16, 1],
    [17, 14], [19, 11], [21, 4], [24, 1], [25, 6]
  ],
  "lang_ids": [
    50259, 50260, 50261, 50262, 50263, 50264, 50265, 50266, 50267, 50268,
    50269, 50270, 50271, 50272, 50273, 50274, 50275, 50276, 50277, 50278,
    50279, 50280, 50281, 50282, 50283, 50284, 50285, 50286, 50287, 50288,
    50289, 50290, 50291, 50292, 50293, 50294, 50295, 50296, 50297, 50298,
    50299, 50300, 50301, 50302, 50303, 50304, 50305, 50306, 50307, 50308,
    50309, 50310, 50311, 50312, 50313, 50314, 50315, 50316, 50317, 50318,
    50319, 50320, 50321, 50322, 50323, 50324, 50325, 50326, 50327, 50328,
    50329, 50330, 50331, 50332, 50333, 50334, 50335, 50336, 50337, 50338,
    50339, 50340, 50341, 50342, 50343, 50344, 50345, 50346, 50347, 50348,
    50349, 50350, 50351, 50352, 50353, 50354, 50355, 50356, 50357, 50358
  ],
  "suppress_ids": [
    1, 2, 7, 8, 9, 10, 14, 25, 26, 27,
    28, 29, 31, 58, 59, 60, 61, 62, 63, 90,
    91, 92, 93, 359, 503, 522, 542, 873, 893, 902,
    918, 922, 931, 1350, 1853, 1982, 2460, 2627, 3246, 3253,
    3268, 3536, 3846, 3961, 4183, 4667, 6585, 6647, 7273, 9061,
    9383, 10428, 10929, 11938, 12033, 12331, 12562, 13793, 14157, 14635,
    15265, 15618, 16553, 16604, 18362, 18956, 20075, 21675, 22520, 26130,
    26161, 26435, 28279, 29464, 31650, 32302, 32470, 36865, 42863, 47425,
    49870, 50254, 50258, 50359, 50360, 50361, 50362, 50363
  ],
  "suppress_ids_begin": [220, 50257]
}
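A quick way to sanity-check the metadata above (a hypothetical snippet using only the standard library; the interpretations in the comments are assumptions drawn from the file contents, not documented guarantees):

```python
import json

# Load the CTranslate2 decoding metadata shown above.
with open("config.json") as f:
    config = json.load(f)

# The 100 consecutive lang_ids (50259..50358) appear to line up with the
# 100 languages listed in the README frontmatter; suppress_ids are token
# ids blocked during decoding, and alignment_heads are the cross-attention
# heads faster-whisper uses for word-level timestamps.
print(len(config["lang_ids"]), "language token ids")
print(len(config["suppress_ids"]), "suppressed token ids")
print(len(config["alignment_heads"]), "alignment heads")
```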
model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69f74147e3334731bc3a76048724833325d2ec74642fb52620eda87352e3d4f1
size 3087284237
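The recorded size is consistent with FP16 storage for a model of this scale; a back-of-the-envelope check, assuming the roughly 1.54B parameter count reported for Whisper large-v3 in the original model card:

```python
# ~1.54e9 parameters at 2 bytes each (FP16) should land near the
# 3,087,284,237-byte payload recorded in the LFS pointer above.
params = 1.54e9        # assumed parameter count (original model card)
expected = params * 2  # bytes for FP16 weights
print("expected ~%.2f GB vs pointer %.2f GB"
      % (expected / 1e9, 3087284237 / 1e9))  # ~3.08 GB vs ~3.09 GB
```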
preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
{
  "chunk_length": 30,
  "feature_extractor_type": "WhisperFeatureExtractor",
  "feature_size": 128,
  "hop_length": 160,
  "n_fft": 400,
  "n_samples": 480000,
  "nb_max_frames": 3000,
  "padding_side": "right",
  "padding_value": 0.0,
  "processor_class": "WhisperProcessor",
  "return_attention_mask": false,
  "sampling_rate": 16000
}
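These values are internally consistent; a short sketch of the standard Whisper feature-extraction arithmetic (illustrative only, not code shipped in this repository):

```python
chunk_length = 30      # seconds of audio per chunk
sampling_rate = 16000  # Hz
hop_length = 160       # samples between successive STFT frames

# 30 s of 16 kHz audio is 480,000 samples, matching "n_samples" above.
n_samples = chunk_length * sampling_rate
assert n_samples == 480000

# One mel frame per hop: 480,000 / 160 = 3,000, matching "nb_max_frames".
assert n_samples // hop_length == 3000
```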
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff