guillaumekln committed
Commit 03feb38
1 Parent(s): 203b8d2

Upload with huggingface_hub

Files changed (5)
  1. README.md +140 -0
  2. config.json +237 -0
  3. model.bin +3 -0
  4. tokenizer.json +0 -0
  5. vocabulary.txt +0 -0
README.md ADDED
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper small model for CTranslate2

This repository contains the conversion of [openai/whisper-small](https://huggingface.co/openai/whisper-small) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("small")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
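
The `info` value returned alongside the segments also reports the detected language. As a small follow-up to the snippet above, a minimal sketch assuming the `language` and `language_probability` attributes described in the faster-whisper README:

```python
# Print the language detected for the same transcription call.
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
```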

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-small --output_dir faster-whisper-small \
    --copy_files tokenizer.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
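
For example, a minimal sketch of overriding the computation type through the faster-whisper loader shown above, assuming the `device` and `compute_type` arguments of the `WhisperModel` constructor; `int8` is one of the types listed in the CTranslate2 quantization documentation:

```python
from faster_whisper import WhisperModel

# Load the FP16 checkpoint but compute in INT8 on CPU;
# CTranslate2 converts the stored weights when the model is loaded.
model = WhisperModel("small", device="cpu", compute_type="int8")
```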

## More information

**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small).**
config.json ADDED
{
  "alignment_heads": [
    [5, 3], [5, 9], [8, 0], [8, 4], [8, 7],
    [8, 8], [9, 0], [9, 7], [9, 9], [10, 5]
  ],
  "lang_ids": [
    50259, 50260, 50261, 50262, 50263, 50264, 50265, 50266, 50267, 50268,
    50269, 50270, 50271, 50272, 50273, 50274, 50275, 50276, 50277, 50278,
    50279, 50280, 50281, 50282, 50283, 50284, 50285, 50286, 50287, 50288,
    50289, 50290, 50291, 50292, 50293, 50294, 50295, 50296, 50297, 50298,
    50299, 50300, 50301, 50302, 50303, 50304, 50305, 50306, 50307, 50308,
    50309, 50310, 50311, 50312, 50313, 50314, 50315, 50316, 50317, 50318,
    50319, 50320, 50321, 50322, 50323, 50324, 50325, 50326, 50327, 50328,
    50329, 50330, 50331, 50332, 50333, 50334, 50335, 50336, 50337, 50338,
    50339, 50340, 50341, 50342, 50343, 50344, 50345, 50346, 50347, 50348,
    50349, 50350, 50351, 50352, 50353, 50354, 50355, 50356, 50357
  ],
  "suppress_ids": [
    1, 2, 7, 8, 9, 10, 14, 25, 26, 27,
    28, 29, 31, 58, 59, 60, 61, 62, 63, 90,
    91, 92, 93, 359, 503, 522, 542, 873, 893, 902,
    918, 922, 931, 1350, 1853, 1982, 2460, 2627, 3246, 3253,
    3268, 3536, 3846, 3961, 4183, 4667, 6585, 6647, 7273, 9061,
    9383, 10428, 10929, 11938, 12033, 12331, 12562, 13793, 14157, 14635,
    15265, 15618, 16553, 16604, 18362, 18956, 20075, 21675, 22520, 26130,
    26161, 26435, 28279, 29464, 31650, 32302, 32470, 36865, 42863, 47425,
    49870, 50254, 50258, 50360, 50361, 50362
  ],
  "suppress_ids_begin": [220, 50257]
}
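
Since config.json is plain JSON, the token lists above are easy to inspect. A minimal sketch, assuming the file has been downloaded locally:

```python
import json

# Load the decoding options that ship with the converted model.
with open("config.json") as f:
    config = json.load(f)

print(len(config["lang_ids"]))       # 99 language token IDs (50259-50357)
print(len(config["suppress_ids"]))   # 86 token IDs suppressed during decoding
print(config["suppress_ids_begin"])  # [220, 50257]
```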
model.bin ADDED
The model weights are stored with Git LFS, so the diff shows only the pointer file:

version https://git-lfs.github.com/spec/v1
oid sha256:3e305921506d8872816023e4c273e75d2419fb89b24da97b4fe7bce14170d671
size 483546902
tokenizer.json ADDED
The diff for this file is too large to render.
 
vocabulary.txt ADDED
The diff for this file is too large to render.