ai-forever committed
Commit db42fcc
Parent(s): bebd800
Update README.md
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: apache-2.0
 language:
-- ru, en
+- en, az, sw, af, ar, ba, be, bxr, bg, bn, cv, hy, da, de, el, es, eu, fa, fi, fr, he, hi, hu, kk, id, it, ja, ka, ky, ko, lt, lv, mn, ml, os, mr, ms, my, nl, ro, pl, pt, sah, ru, tg, sv, ta, te, tk, th, tr, tl, tt, tyv, uk, en, ur, vi, uz, yo, zh, xal
 pipeline_tag: text-generation
 tags:
 - PyTorch
@@ -17,7 +17,7 @@ thumbnail: "https://github.com/sberbank-ai/mgpt"
 
 We introduce a family of autoregressive GPT-like models with 1.3 billion parameters, trained on 60 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus.
 
-We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron]() frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.
+We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.
 
 ## Code
 The source code for the mGPT XL model is available on [Github](https://github.com/sberbank-ai/mgpt).
@@ -33,8 +33,16 @@ bibtex
 
 ## Languages
 
+The model includes 60 languages (ISO codes):
+```az, sw, af, ar, ba, be, bxr, bg, bn, cv, hy, da, de, el, es, eu, fa, fi, fr, he, hi, hu, kk, id, it, ja, ka, ky, ko, lt, lv, mn, ml, os, mr, ms, my, nl, ro, pl, pt, sah, ru, tg, sv, ta, te, tk, th, tr, tl, tt, tyv, uk, en, ur, vi, uz, yo, zh, xal```
+
 ## Training Data Statistics
 
+- Tokens: 559B
+
+<img style="text-align:center; display:block;" src="https://huggingface.co/sberbank-ai/mGPT/blob/main/stats.png" width="200">
+"General training corpus statistics"
+
 
 ## Details
 The model was trained with a sequence length of 1024 using the transformers library by the [SberDevices](https://sberdevices.ru/) team on 80B tokens for 3 epochs. After that, the model was finetuned for 1 epoch with a sequence length of 2048.
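The updated card sets `pipeline_tag: text-generation` and the Details section says the model was trained with the `transformers` library, so a short usage sketch may help. This is a minimal sketch, not the authors' reference code: it assumes the checkpoint loads through the generic Auto classes under the `sberbank-ai/mGPT` id that appears in the card's own links, and the prompt and sampling settings are illustrative.

```python
# Minimal generation sketch for the model described in the card above.
# Assumptions: the hub id "sberbank-ai/mGPT" (taken from the card's links) resolves
# through the generic Auto classes; prompt and sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sberbank-ai/mGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

prompt = "The history of the Kazakh language begins"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,   # length of the generated continuation
        do_sample=True,      # sample instead of greedy decoding
        top_p=0.95,
        temperature=0.8,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```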
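The Details section above reports a finetuned sequence length of 2048, so long prompts should be kept within that window. A small sketch under the same assumed checkpoint id; the 2048 limit is taken from the card text, not read from the model config.

```python
# Truncate long inputs to the 2048-token context reported in the "Details" section.
# Assumption: same "sberbank-ai/mGPT" hub id as in the sketch above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/mGPT")

long_text = "word " * 5000  # stand-in for a document longer than the context window
inputs = tokenizer(
    long_text,
    truncation=True,   # drop tokens past max_length
    max_length=2048,   # finetuned sequence length stated in the card
    return_tensors="pt",
)
print(inputs["input_ids"].shape)  # no more than (1, 2048)
```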