---
datasets:
- bigscience/xP3
- mc4
license: apache-2.0
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
pipeline_tag: text-generation
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
  example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
  example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
  example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
  example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
  example_title: "te-en qa"
- text: "Why is the sky blue?"
  example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
  example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
  example_title: "hi-en fable"
model-index:
- name: mt0-xl
  results:
  - task:
      type: Coreference resolution
    dataset:
      type: winogrande
      name: Winogrande XL (xl)
      config: xl
      split: validation
      revision: a80f460359d1e9a67c006011c94de42a8759430c
    metrics:
    - type: Accuracy
      value: 52.49
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (en)
      config: en
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 61.89
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (fr)
      config: fr
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 59.04
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (jp)
      config: jp
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 60.27
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (pt)
      config: pt
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 66.16
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (ru)
      config: ru
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 59.05
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (zh)
      config: zh
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 62.9
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r1)
      config: r1
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 38.2
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r2)
      config: r2
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 34.8
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r3)
      config: r3
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 39.0
  - task:
      type: Natural language inference
    dataset:
      type: super_glue
      name: SuperGLUE (cb)
      config: cb
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 85.71
  - task:
      type: Natural language inference
    dataset:
      type: super_glue
      name: SuperGLUE (rte)
      config: rte
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 78.7
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ar)
      config: ar
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 51.85
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (bg)
      config: bg
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 54.18
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (de)
      config: de
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 54.78
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (el)
      config: el
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 53.78
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (en)
      config: en
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 56.83
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (es)
      config: es
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 54.78
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (fr)
      config: fr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 54.22
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (hi)
      config: hi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.24
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ru)
      config: ru
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 53.09
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (sw)
      config: sw
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 49.6
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (th)
      config: th
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 52.13
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (tr)
      config: tr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.56
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ur)
      config: ur
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 47.91
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (vi)
      config: vi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 53.21
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (zh)
      config: zh
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.64
  - task:
      type: Program synthesis
    dataset:
      type: openai_humaneval
      name: HumanEval
      config: None
      split: test
      revision: e8dc562f5de170c54b5481011dd9f4fa04845771
    metrics:
    - type: Pass@1
      value: 0.00
    - type: Pass@10
      value: 0.00
    - type: Pass@100
      value: 0.00
  - task:
      type: Sentence completion
    dataset:
      type: story_cloze
      name: StoryCloze (2016)
      config: "2016"
      split: validation
      revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
    metrics:
    - type: Accuracy
      value: 79.1
  - task:
      type: Sentence completion
    dataset:
      type: super_glue
      name: SuperGLUE (copa)
      config: copa
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 72.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (et)
      config: et
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 70.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (ht)
      config: ht
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 66.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (id)
      config: id
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 71.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (it)
      config: it
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 70.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (qu)
      config: qu
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 56.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (sw)
      config: sw
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 53.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (ta)
      config: ta
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 64.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (th)
      config: th
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 60.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (tr)
      config: tr
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 58.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (vi)
      config: vi
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 68.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (zh)
      config: zh
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 65.0
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (ar)
      config: ar
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 70.09
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (es)
      config: es
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 77.17
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (eu)
      config: eu
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 69.03
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (hi)
      config: hi
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 71.08
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (id)
      config: id
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 75.71
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (my)
      config: my
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 65.65
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (ru)
      config: ru
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 74.85
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (sw)
      config: sw
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 71.14
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (te)
      config: te
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 68.89
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (zh)
      config: zh
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 72.93
---

![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)

# Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)

# Model Summary

> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.

- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. The model understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**

<table>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
  </tr>
  <tr>
    <td>Parameters</td>
    <td>300M</td>
    <td>580M</td>
    <td>1.2B</td>
    <td>3.7B</td>
    <td>13B</td>
    <td>560M</td>
    <td>1.1B</td>
    <td>1.7B</td>
    <td>3B</td>
    <td>7.1B</td>
    <td>176B</td>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
  </tr>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
  </tr>
  <tr>
    <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
  </tr>
  <tr>
    <td>Finetuned Model</td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
    <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
  </tr>
  <tr>
    <th colspan="12">Original pretrained checkpoints. Not recommended.</th>
  </tr>
  <tr>
    <td>Pretrained Model</td>
    <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
    <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
    <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
    <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
    <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
    <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
  </tr>
</table>


# Use

## Intended use

We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.

**Feel free to share your generations in the Community tab!**

## How to use

### CPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-xl"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

### GPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-xl"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

### GPU in 8bit

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-xl"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

<!-- Necessary for whitespace -->
###

# Limitations

**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, tell the model explicitly, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
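
The prompting advice above can be sketched as a small helper. This is a hypothetical illustration (`prompt_variants` is not part of the model or the `transformers` API); it only builds the prompt strings that would then be passed to `tokenizer`/`model.generate` as in the examples above:

```python
# Hypothetical helper illustrating the prompt patterns above: end the input
# unambiguously (full stop) and state the task so the model knows when to answer.
def prompt_variants(text: str, target_lang: str = "English") -> list[str]:
    # Append a full stop so the model does not try to continue the input.
    if not text.endswith((".", "!", "?")):
        text += "."
    return [
        f"Translate to {target_lang}: {text}",
        f"Translate to {target_lang}: {text} Translation:",
        f'What is "{text}" in {target_lang}?',
    ]

print(prompt_variants("Je t'aime"))
```

Any of the returned strings should work better than the bare, unterminated input.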

# Training

## Model

- **Architecture:** Same as [mt5-xl](https://huggingface.co/google/mt5-xl); also refer to the `config.json` file
- **Finetuning steps:** 7000
- **Finetuning tokens:** 1.29 billion
- **Precision:** bfloat16

## Hardware

- **TPUs:** TPUv4-256

## Software

- **Orchestration:** [T5X](https://github.com/google-research/t5x)
- **Neural networks:** [JAX](https://github.com/google/jax)

# Evaluation

We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.

# Citation
```bibtex
@misc{muennighoff2022crosslingual,
      title={Crosslingual Generalization through Multitask Finetuning},
      author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
      year={2022},
      eprint={2211.01786},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```