---
datasets:
- bs-la/xP3ru
license: bigscience-bloom-rail-1.0
model-index:
- name: bloomz-7b1
  results:
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (ru)
      config: ru
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 54.29
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ru)
      config: ru
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 34.62
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (ru)
      config: ru
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 55.99
---

# Model Summary

[bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) finetuned on Russian multitask data. It is therefore the same as [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1), except that it was finetuned on **only** Russian data. The "500m" in the model name stands for 500 million finetuning tokens.
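
As a standard causal-decoder checkpoint, the model can be loaded through the usual `transformers` text-generation API. The snippet below is a minimal sketch: the model ID is a placeholder (substitute this repository's actual ID), and the Russian prompt is purely illustrative.

```python
# Minimal usage sketch. Assumes `transformers` and `torch` are installed,
# and that MODEL_ID points at this repository's checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "bs-la/bloomz-7b1-500m-ru"  # placeholder ID; replace with this repo's ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Zero-shot Russian prompt: "Translate to English: I love you."
prompt = "Переведи на английский: Я тебя люблю."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```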

# Citation

```
BLOOM+1 - TODO
```

```bibtex
@misc{muennighoff2022crosslingual,
  title={Crosslingual Generalization through Multitask Finetuning},
  author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
  year={2022},
  eprint={2211.01786},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```