exdysa TimeRobber committed on
Commit
b6341ee
0 Parent(s):

Duplicate from bigscience/mt0-large


Co-authored-by: Thomas Wang <TimeRobber@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,38 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ onnx/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model_merged.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/encoder_model.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_with_past_model.onnx_data filter=lfs diff=lfs merge=lfs -text
+ onnx/decoder_model.onnx_data filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,913 @@
1
+ ---
2
+ datasets:
3
+ - bigscience/xP3
4
+ - mc4
5
+ license: apache-2.0
6
+ language:
7
+ - af
8
+ - am
9
+ - ar
10
+ - az
11
+ - be
12
+ - bg
13
+ - bn
14
+ - ca
15
+ - ceb
16
+ - co
17
+ - cs
18
+ - cy
19
+ - da
20
+ - de
21
+ - el
22
+ - en
23
+ - eo
24
+ - es
25
+ - et
26
+ - eu
27
+ - fa
28
+ - fi
29
+ - fil
30
+ - fr
31
+ - fy
32
+ - ga
33
+ - gd
34
+ - gl
35
+ - gu
36
+ - ha
37
+ - haw
38
+ - hi
39
+ - hmn
40
+ - ht
41
+ - hu
42
+ - hy
43
+ - ig
44
+ - is
45
+ - it
46
+ - iw
47
+ - ja
48
+ - jv
49
+ - ka
50
+ - kk
51
+ - km
52
+ - kn
53
+ - ko
54
+ - ku
55
+ - ky
56
+ - la
57
+ - lb
58
+ - lo
59
+ - lt
60
+ - lv
61
+ - mg
62
+ - mi
63
+ - mk
64
+ - ml
65
+ - mn
66
+ - mr
67
+ - ms
68
+ - mt
69
+ - my
70
+ - ne
71
+ - nl
72
+ - 'no'
73
+ - ny
74
+ - pa
75
+ - pl
76
+ - ps
77
+ - pt
78
+ - ro
79
+ - ru
80
+ - sd
81
+ - si
82
+ - sk
83
+ - sl
84
+ - sm
85
+ - sn
86
+ - so
87
+ - sq
88
+ - sr
89
+ - st
90
+ - su
91
+ - sv
92
+ - sw
93
+ - ta
94
+ - te
95
+ - tg
96
+ - th
97
+ - tr
98
+ - uk
99
+ - und
100
+ - ur
101
+ - uz
102
+ - vi
103
+ - xh
104
+ - yi
105
+ - yo
106
+ - zh
107
+ - zu
108
+ pipeline_tag: text2text-generation
109
+ widget:
110
+ - text: >-
111
+ 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
112
+ review as positive, neutral or negative?
113
+ example_title: zh-en sentiment
114
+ - text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
115
+ example_title: zh-zh sentiment
116
+ - text: Suggest at least five related search terms to "Mạng neural nhân tạo".
117
+ example_title: vi-en query
118
+ - text: >-
119
+ Proposez au moins cinq mots clés concernant «Réseau de neurones
120
+ artificiels».
121
+ example_title: fr-fr query
122
+ - text: Explain in a sentence in Telugu what is backpropagation in neural networks.
123
+ example_title: te-en qa
124
+ - text: Why is the sky blue?
125
+ example_title: en-en qa
126
+ - text: >-
127
+ Write a fairy tale about a troll saving a princess from a dangerous dragon.
128
+ The fairy tale is a masterpiece that has achieved praise worldwide and its
129
+ moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
130
+ example_title: es-en fable
131
+ - text: >-
132
+ Write a fable about wood elves living in a forest that is suddenly invaded
133
+ by ogres. The fable is a masterpiece that has achieved praise worldwide and
134
+ its moral is "Violence is the last refuge of the incompetent". Fable (in
135
+ Hindi):
136
+ example_title: hi-en fable
137
+ model-index:
138
+ - name: mt0-large
139
+ results:
140
+ - task:
141
+ type: Coreference resolution
142
+ dataset:
143
+ type: winogrande
144
+ name: Winogrande XL (xl)
145
+ config: xl
146
+ split: validation
147
+ revision: a80f460359d1e9a67c006011c94de42a8759430c
148
+ metrics:
149
+ - type: Accuracy
150
+ value: 51.78
151
+ - task:
152
+ type: Coreference resolution
153
+ dataset:
154
+ type: Muennighoff/xwinograd
155
+ name: XWinograd (en)
156
+ config: en
157
+ split: test
158
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
159
+ metrics:
160
+ - type: Accuracy
161
+ value: 54.8
162
+ - task:
163
+ type: Coreference resolution
164
+ dataset:
165
+ type: Muennighoff/xwinograd
166
+ name: XWinograd (fr)
167
+ config: fr
168
+ split: test
169
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
170
+ metrics:
171
+ - type: Accuracy
172
+ value: 56.63
173
+ - task:
174
+ type: Coreference resolution
175
+ dataset:
176
+ type: Muennighoff/xwinograd
177
+ name: XWinograd (jp)
178
+ config: jp
179
+ split: test
180
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
181
+ metrics:
182
+ - type: Accuracy
183
+ value: 53.08
184
+ - task:
185
+ type: Coreference resolution
186
+ dataset:
187
+ type: Muennighoff/xwinograd
188
+ name: XWinograd (pt)
189
+ config: pt
190
+ split: test
191
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
192
+ metrics:
193
+ - type: Accuracy
194
+ value: 56.27
195
+ - task:
196
+ type: Coreference resolution
197
+ dataset:
198
+ type: Muennighoff/xwinograd
199
+ name: XWinograd (ru)
200
+ config: ru
201
+ split: test
202
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
203
+ metrics:
204
+ - type: Accuracy
205
+ value: 55.56
206
+ - task:
207
+ type: Coreference resolution
208
+ dataset:
209
+ type: Muennighoff/xwinograd
210
+ name: XWinograd (zh)
211
+ config: zh
212
+ split: test
213
+ revision: 9dd5ea5505fad86b7bedad667955577815300cee
214
+ metrics:
215
+ - type: Accuracy
216
+ value: 54.37
217
+ - task:
218
+ type: Natural language inference
219
+ dataset:
220
+ type: anli
221
+ name: ANLI (r1)
222
+ config: r1
223
+ split: validation
224
+ revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
225
+ metrics:
226
+ - type: Accuracy
227
+ value: 33.3
228
+ - task:
229
+ type: Natural language inference
230
+ dataset:
231
+ type: anli
232
+ name: ANLI (r2)
233
+ config: r2
234
+ split: validation
235
+ revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
236
+ metrics:
237
+ - type: Accuracy
238
+ value: 34.7
239
+ - task:
240
+ type: Natural language inference
241
+ dataset:
242
+ type: anli
243
+ name: ANLI (r3)
244
+ config: r3
245
+ split: validation
246
+ revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
247
+ metrics:
248
+ - type: Accuracy
249
+ value: 34.75
250
+ - task:
251
+ type: Natural language inference
252
+ dataset:
253
+ type: super_glue
254
+ name: SuperGLUE (cb)
255
+ config: cb
256
+ split: validation
257
+ revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
258
+ metrics:
259
+ - type: Accuracy
260
+ value: 51.79
261
+ - task:
262
+ type: Natural language inference
263
+ dataset:
264
+ type: super_glue
265
+ name: SuperGLUE (rte)
266
+ config: rte
267
+ split: validation
268
+ revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
269
+ metrics:
270
+ - type: Accuracy
271
+ value: 64.26
272
+ - task:
273
+ type: Natural language inference
274
+ dataset:
275
+ type: xnli
276
+ name: XNLI (ar)
277
+ config: ar
278
+ split: validation
279
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
280
+ metrics:
281
+ - type: Accuracy
282
+ value: 42.61
283
+ - task:
284
+ type: Natural language inference
285
+ dataset:
286
+ type: xnli
287
+ name: XNLI (bg)
288
+ config: bg
289
+ split: validation
290
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
291
+ metrics:
292
+ - type: Accuracy
293
+ value: 43.94
294
+ - task:
295
+ type: Natural language inference
296
+ dataset:
297
+ type: xnli
298
+ name: XNLI (de)
299
+ config: de
300
+ split: validation
301
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
302
+ metrics:
303
+ - type: Accuracy
304
+ value: 44.18
305
+ - task:
306
+ type: Natural language inference
307
+ dataset:
308
+ type: xnli
309
+ name: XNLI (el)
310
+ config: el
311
+ split: validation
312
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
313
+ metrics:
314
+ - type: Accuracy
315
+ value: 43.94
316
+ - task:
317
+ type: Natural language inference
318
+ dataset:
319
+ type: xnli
320
+ name: XNLI (en)
321
+ config: en
322
+ split: validation
323
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
324
+ metrics:
325
+ - type: Accuracy
326
+ value: 44.26
327
+ - task:
328
+ type: Natural language inference
329
+ dataset:
330
+ type: xnli
331
+ name: XNLI (es)
332
+ config: es
333
+ split: validation
334
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
335
+ metrics:
336
+ - type: Accuracy
337
+ value: 45.34
338
+ - task:
339
+ type: Natural language inference
340
+ dataset:
341
+ type: xnli
342
+ name: XNLI (fr)
343
+ config: fr
344
+ split: validation
345
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
346
+ metrics:
347
+ - type: Accuracy
348
+ value: 42.01
349
+ - task:
350
+ type: Natural language inference
351
+ dataset:
352
+ type: xnli
353
+ name: XNLI (hi)
354
+ config: hi
355
+ split: validation
356
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
357
+ metrics:
358
+ - type: Accuracy
359
+ value: 41.89
360
+ - task:
361
+ type: Natural language inference
362
+ dataset:
363
+ type: xnli
364
+ name: XNLI (ru)
365
+ config: ru
366
+ split: validation
367
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
368
+ metrics:
369
+ - type: Accuracy
370
+ value: 42.13
371
+ - task:
372
+ type: Natural language inference
373
+ dataset:
374
+ type: xnli
375
+ name: XNLI (sw)
376
+ config: sw
377
+ split: validation
378
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
379
+ metrics:
380
+ - type: Accuracy
381
+ value: 40.08
382
+ - task:
383
+ type: Natural language inference
384
+ dataset:
385
+ type: xnli
386
+ name: XNLI (th)
387
+ config: th
388
+ split: validation
389
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
390
+ metrics:
391
+ - type: Accuracy
392
+ value: 40.8
393
+ - task:
394
+ type: Natural language inference
395
+ dataset:
396
+ type: xnli
397
+ name: XNLI (tr)
398
+ config: tr
399
+ split: validation
400
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
401
+ metrics:
402
+ - type: Accuracy
403
+ value: 41.29
404
+ - task:
405
+ type: Natural language inference
406
+ dataset:
407
+ type: xnli
408
+ name: XNLI (ur)
409
+ config: ur
410
+ split: validation
411
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
412
+ metrics:
413
+ - type: Accuracy
414
+ value: 39.88
415
+ - task:
416
+ type: Natural language inference
417
+ dataset:
418
+ type: xnli
419
+ name: XNLI (vi)
420
+ config: vi
421
+ split: validation
422
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
423
+ metrics:
424
+ - type: Accuracy
425
+ value: 41.81
426
+ - task:
427
+ type: Natural language inference
428
+ dataset:
429
+ type: xnli
430
+ name: XNLI (zh)
431
+ config: zh
432
+ split: validation
433
+ revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
434
+ metrics:
435
+ - type: Accuracy
436
+ value: 40.84
437
+ - task:
438
+ type: Sentence completion
439
+ dataset:
440
+ type: story_cloze
441
+ name: StoryCloze (2016)
442
+ config: '2016'
443
+ split: validation
444
+ revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
445
+ metrics:
446
+ - type: Accuracy
447
+ value: 59.49
448
+ - task:
449
+ type: Sentence completion
450
+ dataset:
451
+ type: super_glue
452
+ name: SuperGLUE (copa)
453
+ config: copa
454
+ split: validation
455
+ revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
456
+ metrics:
457
+ - type: Accuracy
458
+ value: 65
459
+ - task:
460
+ type: Sentence completion
461
+ dataset:
462
+ type: xcopa
463
+ name: XCOPA (et)
464
+ config: et
465
+ split: validation
466
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
467
+ metrics:
468
+ - type: Accuracy
469
+ value: 56
470
+ - task:
471
+ type: Sentence completion
472
+ dataset:
473
+ type: xcopa
474
+ name: XCOPA (ht)
475
+ config: ht
476
+ split: validation
477
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
478
+ metrics:
479
+ - type: Accuracy
480
+ value: 62
481
+ - task:
482
+ type: Sentence completion
483
+ dataset:
484
+ type: xcopa
485
+ name: XCOPA (id)
486
+ config: id
487
+ split: validation
488
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
489
+ metrics:
490
+ - type: Accuracy
491
+ value: 61
492
+ - task:
493
+ type: Sentence completion
494
+ dataset:
495
+ type: xcopa
496
+ name: XCOPA (it)
497
+ config: it
498
+ split: validation
499
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
500
+ metrics:
501
+ - type: Accuracy
502
+ value: 63
503
+ - task:
504
+ type: Sentence completion
505
+ dataset:
506
+ type: xcopa
507
+ name: XCOPA (qu)
508
+ config: qu
509
+ split: validation
510
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
511
+ metrics:
512
+ - type: Accuracy
513
+ value: 57
514
+ - task:
515
+ type: Sentence completion
516
+ dataset:
517
+ type: xcopa
518
+ name: XCOPA (sw)
519
+ config: sw
520
+ split: validation
521
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
522
+ metrics:
523
+ - type: Accuracy
524
+ value: 54
525
+ - task:
526
+ type: Sentence completion
527
+ dataset:
528
+ type: xcopa
529
+ name: XCOPA (ta)
530
+ config: ta
531
+ split: validation
532
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
533
+ metrics:
534
+ - type: Accuracy
535
+ value: 62
536
+ - task:
537
+ type: Sentence completion
538
+ dataset:
539
+ type: xcopa
540
+ name: XCOPA (th)
541
+ config: th
542
+ split: validation
543
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
544
+ metrics:
545
+ - type: Accuracy
546
+ value: 57
547
+ - task:
548
+ type: Sentence completion
549
+ dataset:
550
+ type: xcopa
551
+ name: XCOPA (tr)
552
+ config: tr
553
+ split: validation
554
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
555
+ metrics:
556
+ - type: Accuracy
557
+ value: 57
558
+ - task:
559
+ type: Sentence completion
560
+ dataset:
561
+ type: xcopa
562
+ name: XCOPA (vi)
563
+ config: vi
564
+ split: validation
565
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
566
+ metrics:
567
+ - type: Accuracy
568
+ value: 63
569
+ - task:
570
+ type: Sentence completion
571
+ dataset:
572
+ type: xcopa
573
+ name: XCOPA (zh)
574
+ config: zh
575
+ split: validation
576
+ revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
577
+ metrics:
578
+ - type: Accuracy
579
+ value: 58
580
+ - task:
581
+ type: Sentence completion
582
+ dataset:
583
+ type: Muennighoff/xstory_cloze
584
+ name: XStoryCloze (ar)
585
+ config: ar
586
+ split: validation
587
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
588
+ metrics:
589
+ - type: Accuracy
590
+ value: 56.59
591
+ - task:
592
+ type: Sentence completion
593
+ dataset:
594
+ type: Muennighoff/xstory_cloze
595
+ name: XStoryCloze (es)
596
+ config: es
597
+ split: validation
598
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
599
+ metrics:
600
+ - type: Accuracy
601
+ value: 55.72
602
+ - task:
603
+ type: Sentence completion
604
+ dataset:
605
+ type: Muennighoff/xstory_cloze
606
+ name: XStoryCloze (eu)
607
+ config: eu
608
+ split: validation
609
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
610
+ metrics:
611
+ - type: Accuracy
612
+ value: 52.61
613
+ - task:
614
+ type: Sentence completion
615
+ dataset:
616
+ type: Muennighoff/xstory_cloze
617
+ name: XStoryCloze (hi)
618
+ config: hi
619
+ split: validation
620
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
621
+ metrics:
622
+ - type: Accuracy
623
+ value: 52.15
624
+ - task:
625
+ type: Sentence completion
626
+ dataset:
627
+ type: Muennighoff/xstory_cloze
628
+ name: XStoryCloze (id)
629
+ config: id
630
+ split: validation
631
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
632
+ metrics:
633
+ - type: Accuracy
634
+ value: 54.67
635
+ - task:
636
+ type: Sentence completion
637
+ dataset:
638
+ type: Muennighoff/xstory_cloze
639
+ name: XStoryCloze (my)
640
+ config: my
641
+ split: validation
642
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
643
+ metrics:
644
+ - type: Accuracy
645
+ value: 51.69
646
+ - task:
647
+ type: Sentence completion
648
+ dataset:
649
+ type: Muennighoff/xstory_cloze
650
+ name: XStoryCloze (ru)
651
+ config: ru
652
+ split: validation
653
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
654
+ metrics:
655
+ - type: Accuracy
656
+ value: 53.74
657
+ - task:
658
+ type: Sentence completion
659
+ dataset:
660
+ type: Muennighoff/xstory_cloze
661
+ name: XStoryCloze (sw)
662
+ config: sw
663
+ split: validation
664
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
665
+ metrics:
666
+ - type: Accuracy
667
+ value: 55.53
668
+ - task:
669
+ type: Sentence completion
670
+ dataset:
671
+ type: Muennighoff/xstory_cloze
672
+ name: XStoryCloze (te)
673
+ config: te
674
+ split: validation
675
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
676
+ metrics:
677
+ - type: Accuracy
678
+ value: 57.18
679
+ - task:
680
+ type: Sentence completion
681
+ dataset:
682
+ type: Muennighoff/xstory_cloze
683
+ name: XStoryCloze (zh)
684
+ config: zh
685
+ split: validation
686
+ revision: 8bb76e594b68147f1a430e86829d07189622b90d
687
+ metrics:
688
+ - type: Accuracy
689
+ value: 59.5
690
+ ---
691
+
692
+ ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)
693
+
694
+ # Table of Contents
695
+
696
+ 1. [Model Summary](#model-summary)
697
+ 2. [Use](#use)
698
+ 3. [Limitations](#limitations)
699
+ 4. [Training](#training)
700
+ 5. [Evaluation](#evaluation)
701
+ 6. [Citation](#citation)
702
+
703
+ # Model Summary
704
+
705
+ > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
706
+
707
+ - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
708
+ - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
709
+ - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
710
+ - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
711
+ - **BLOOMZ & mT0 Model Family:**
712
+
713
+ <div class="max-w-full overflow-auto">
714
+ <table>
715
+ <tr>
716
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
+ </tr>
718
+ <tr>
719
+ <td>Parameters</td>
720
+ <td>300M</td>
721
+ <td>580M</td>
722
+ <td>1.2B</td>
723
+ <td>3.7B</td>
724
+ <td>13B</td>
725
+ <td>560M</td>
726
+ <td>1.1B</td>
727
+ <td>1.7B</td>
728
+ <td>3B</td>
729
+ <td>7.1B</td>
730
+ <td>176B</td>
731
+ </tr>
732
+ <tr>
733
+ <td>Finetuned Model</td>
734
+ <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
735
+ <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
736
+ <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
737
+ <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
738
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
739
+ <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
740
+ <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
741
+ <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
742
+ <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
743
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
744
+ <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
745
+ </tr>
747
+ <tr>
748
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
749
+ </tr>
750
+ <tr>
751
+ <td>Finetuned Model</td>
752
+ <td></td>
753
+ <td></td>
754
+ <td></td>
755
+ <td></td>
756
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
757
+ <td></td>
758
+ <td></td>
759
+ <td></td>
760
+ <td></td>
761
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
762
+ <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
763
+ </tr>
764
+ <tr>
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
765
+ </tr>
766
+ <tr>
767
+ <td>Finetuned Model</td>
768
+ <td></td>
769
+ <td></td>
770
+ <td></td>
771
+ <td></td>
772
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
773
+ <td></td>
774
+ <td></td>
775
+ <td></td>
776
+ <td></td>
777
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
778
+ <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
779
+ </tr>
780
+ <tr>
+ <th colspan="12">Original pretrained checkpoints. Not recommended.</th>
+ </tr>
781
+ <tr>
782
+ <td>Pretrained Model</td>
783
+ <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
784
+ <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
785
+ <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
786
+ <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
787
+ <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
788
+ <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
789
+ <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
790
+ <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
791
+ <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
792
+ <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
793
+ <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
794
+ </tr>
795
+ </table>
796
+ </div>
797
+
798
+
799
+ # Use
800
+
801
+ ## Intended use
802
+
803
+ We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
804
+ - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
805
+ - Suggest at least five related search terms to "Mạng neural nhân tạo".
806
+ - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
807
+ - Explain in a sentence in Telugu what is backpropagation in neural networks.
808
+
809
+ **Feel free to share your generations in the Community tab!**
810
+
811
+ ## How to use
812
+
813
+ ### CPU
814
+
815
+ <details>
816
+ <summary> Click to expand </summary>
817
+
818
+ ```python
+ # pip install -q transformers
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ checkpoint = "bigscience/mt0-large"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
831
+
832
+ </details>
833
+
834
+ ### GPU
835
+
836
+ <details>
837
+ <summary> Click to expand </summary>
838
+
839
+ ```python
+ # pip install -q transformers accelerate
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ checkpoint = "bigscience/mt0-large"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
852
+
853
+ </details>
854
+
855
+ ### GPU in 8bit
856
+
857
+ <details>
858
+ <summary> Click to expand </summary>
859
+
860
+ ```python
+ # pip install -q transformers accelerate bitsandbytes
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ checkpoint = "bigscience/mt0-large"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
873
+
874
+ </details>
875
+
876
+ <!-- Necessary for whitespace -->
877
+ ###
878
+
879
+ # Limitations
880
+
881
+ **Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
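The advice above can be sketched as a small prompt-building helper (illustrative only; `translation_prompt` is our own name, not part of the model's API):

```python
# Illustrative sketch of the prompt-engineering advice: make the end of the
# input explicit so the model answers instead of continuing the source text.
def translation_prompt(text: str, target_lang: str = "English") -> str:
    """Build a translation prompt with an explicit end-of-input marker."""
    text = text.strip()
    if not text.endswith((".", "!", "?")):
        text += "."  # close the sentence so the model does not try to continue it
    return f"Translate to {target_lang}: {text} Translation:"

print(translation_prompt("Je t'aime"))
# Translate to English: Je t'aime. Translation:
```

The trailing "*Translation:*" cue is one of the prompt shapes recommended above; any equivalent marker that signals where the input ends should work similarly.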
882
+
883
+ # Training
884
+
885
+ ## Model
886
+
887
+ - **Architecture:** Same as [mt5-large](https://huggingface.co/google/mt5-large), also refer to the `config.json` file
888
+ - **Finetuning steps:** 25000
889
+ - **Finetuning tokens:** 4.62 billion
890
+ - **Precision:** bfloat16
891
+
892
+ ## Hardware
893
+
894
+ - **TPUs:** TPUv4-64
895
+
896
+ ## Software
897
+
898
+ - **Orchestration:** [T5X](https://github.com/google-research/t5x)
899
+ - **Neural networks:** [Jax](https://github.com/google/jax)
900
+
901
+ # Evaluation
902
+
903
+ We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
904
+
905
+ # Citation
906
+ ```bibtex
907
+ @article{muennighoff2022crosslingual,
908
+ title={Crosslingual generalization through multitask finetuning},
909
+ author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
910
+ journal={arXiv preprint arXiv:2211.01786},
911
+ year={2022}
912
+ }
913
+ ```
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "_name_or_path": "google/mt5-large",
+ "architectures": [
+ "MT5ForConditionalGeneration"
+ ],
+ "d_ff": 2816,
+ "d_kv": 64,
+ "d_model": 1024,
+ "decoder_start_token_id": 0,
+ "dense_act_fn": "gelu_new",
+ "dropout_rate": 0.1,
+ "eos_token_id": 1,
+ "feed_forward_proj": "gated-gelu",
+ "initializer_factor": 1.0,
+ "is_encoder_decoder": true,
+ "is_gated_act": true,
+ "layer_norm_epsilon": 1e-06,
+ "model_type": "mt5",
+ "num_decoder_layers": 24,
+ "num_heads": 16,
+ "num_layers": 24,
+ "output_past": true,
+ "pad_token_id": 0,
+ "relative_attention_max_distance": 128,
+ "relative_attention_num_buckets": 32,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "T5Tokenizer",
+ "torch_dtype": "float32",
+ "transformers_version": "4.23.1",
+ "use_cache": true,
+ "vocab_size": 250112
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f9e1961a362116b17c9e33a458394baed48f9147de767952086d5c4c662c018
+ size 4918393736
onnx/config.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "_name_or_path": "bigscience/mt0-large",
+ "architectures": [
+ "MT5ForConditionalGeneration"
+ ],
+ "d_ff": 2816,
+ "d_kv": 64,
+ "d_model": 1024,
+ "decoder_start_token_id": 0,
+ "dense_act_fn": "gelu_new",
+ "dropout_rate": 0.1,
+ "eos_token_id": 1,
+ "feed_forward_proj": "gated-gelu",
+ "initializer_factor": 1.0,
+ "is_encoder_decoder": true,
+ "is_gated_act": true,
+ "layer_norm_epsilon": 1e-06,
+ "model_type": "mt5",
+ "num_decoder_layers": 24,
+ "num_heads": 16,
+ "num_layers": 24,
+ "output_past": true,
+ "pad_token_id": 0,
+ "relative_attention_max_distance": 128,
+ "relative_attention_num_buckets": 32,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "T5Tokenizer",
+ "transformers_version": "4.30.2",
+ "use_cache": true,
+ "vocab_size": 250112
+ }
onnx/decoder_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c3152f77130ea7d54bce926c2df72978b00103d59e850959218fabc9f2025dc
+ size 784246
onnx/decoder_model.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:738a9eca0af884b66ed05a9a2a62ded1654c7d3ac12b0cb46783014c2c4204fa
+ size 3684997120
onnx/decoder_model_merged.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8974d28de4f717f5c9a5f0c95f06fbfffedbdbe812a891e700eb37c21c30f9ed
+ size 1488596
onnx/decoder_model_merged.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:738a9eca0af884b66ed05a9a2a62ded1654c7d3ac12b0cb46783014c2c4204fa
+ size 3684997120
onnx/decoder_with_past_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:acef32bbf16f453edf2d8cfff50fb305b27cbe22b266e832fa9c34d5dbddaf71
+ size 709918
onnx/decoder_with_past_model.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e0037da14f8960666afd555f883174a28f74aea707cb283bceae09f3fd25352
+ size 3483670528
onnx/encoder_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26057d54691ca70844f85458f4b171f316a2371c8009d9573fee344a105c7dd2
+ size 409187
onnx/encoder_model.onnx_data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce085f96c3c98e0c2759f61b7a8e158fb931fca5a1ee2a86b660315f7e7ceb45
+ size 2257786880
onnx/generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "_from_model_config": true,
+ "decoder_start_token_id": 0,
+ "eos_token_id": 1,
+ "pad_token_id": 0,
+ "transformers_version": "4.30.2"
+ }
onnx/special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "eos_token": "</s>",
+ "pad_token": "<pad>",
+ "unk_token": "<unk>"
+ }
onnx/spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef78f86560d809067d12bac6c09f19a462cb3af3f54d2b8acbba26e1433125d6
+ size 4309802
onnx/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99cc999819aaabf74898a252863b10d86fbcd86e8b3f65c118ff334ff85c5ea5
+ size 16315121
onnx/tokenizer_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+ "additional_special_tokens": null,
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "</s>",
+ "extra_ids": 0,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "<pad>",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "T5Tokenizer",
+ "unk_token": "<unk>"
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef384b5e1e6a0212e833ec12cc1ca3a3a2693777715a825dc369935fe37361af
+ size 4918507577
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "eos_token": "</s>",
+ "pad_token": "<pad>",
+ "unk_token": "<unk>"
+ }
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef78f86560d809067d12bac6c09f19a462cb3af3f54d2b8acbba26e1433125d6
+ size 4309802
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93c3578052e1605d8332eb961bc08d72e246071974e4cc54aa6991826b802aa5
+ size 16330369
tokenizer_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+ "additional_special_tokens": null,
+ "eos_token": "</s>",
+ "extra_ids": 0,
+ "name_or_path": "google/mt5-large",
+ "pad_token": "<pad>",
+ "sp_model_kwargs": {},
+ "special_tokens_map_file": "/home/patrick/.cache/torch/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276",
+ "tokenizer_class": "T5Tokenizer",
+ "unk_token": "<unk>"
+ }