---
license: mit
language:
- ar
- he
- vi
- id
- jv
- ms
- tl
- lv
- lt
- eu
- ml
- ta
- te
- hy
- bn
- mr
- hi
- ur
- af
- da
- en
- de
- sv
- fr
- it
- pt
- ro
- es
- el
- os
- tg
- fa
- ja
- ka
- ko
- th
- bxr
- xal
- mn
- sw
- yo
- be
- bg
- ru
- uk
- pl
- my
- uz
- ba
- kk
- ky
- tt
- az
- cv
- tr
- tk
- tyv
- sah
- et
- fi
- hu
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- transformers
---
Original model: https://huggingface.co/ai-forever/mGPT-13B

# 🌻 mGPT 13B

A multilingual language model trained on **61** languages from **25** language families (see the list below).

## Dataset

The model was pretrained on 600 GB of texts, mostly drawn from MC4 and Wikipedia. The training data was deduplicated: every text in the corpus was hashed with a 64-bit hash, and only texts with a unique hash were kept. The documents were also filtered by their compression rate under zlib, discarding the most strongly and the most weakly compressing deduplicated texts.
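
Below is a minimal sketch of this style of filtering in Python. The specific hash function and the compression-ratio thresholds (`min_ratio`, `max_ratio`) are illustrative assumptions, not the values used for training:

```python
import hashlib
import zlib

def text_hash64(text: str) -> int:
    # 64-bit hash of the text (first 8 bytes of its MD5 digest);
    # the exact hash function used for mGPT is not specified here.
    return int.from_bytes(hashlib.md5(text.encode("utf-8")).digest()[:8], "big")

def compression_ratio(text: str) -> float:
    # Compressed size over raw size: very low values indicate highly
    # repetitive text, very high values indicate incompressible noise.
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / max(len(raw), 1)

def filter_corpus(texts, min_ratio=0.2, max_ratio=0.9):
    # Hypothetical thresholds: keep texts with a unique 64-bit hash whose
    # compression ratio falls inside [min_ratio, max_ratio].
    seen = set()
    for text in texts:
        h = text_hash64(text)
        if h in seen:
            continue  # exact duplicate under the 64-bit hash
        seen.add(h)
        if min_ratio <= compression_ratio(text) <= max_ratio:
            yield text
```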

Here is the number of tokens for each language in the pretraining corpus, on a logarithmic scale:

![](https://i.imgur.com/KSMfVX1.png)

## Languages

Afrikaans (af), Arabic (ar), Armenian (hy), Azerbaijani (az), Basque (eu), Bashkir (ba), Belarusian (be), Bengali (bn), Bulgarian (bg), Burmese (my), Buryat (bxr), Chuvash (cv), Danish (da), English (en), Estonian (et), Finnish (fi), French (fr), Georgian (ka), German (de), Greek (el), Hebrew (he), Hindi (hi), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Javanese (jv), Kalmyk (xal), Kazakh (kk), Korean (ko), Kyrgyz (ky), Latvian (lv), Lithuanian (lt), Malay (ms), Malayalam (ml), Marathi (mr), Mongolian (mn), Ossetian (os), Persian (fa), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Spanish (es), Swahili (sw), Swedish (sv), Tagalog (tl), Tajik (tg), Tamil (ta), Tatar (tt), Telugu (te), Thai (th), Turkish (tr), Turkmen (tk), Tuvan (tyv), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Yakut (sah), Yoruba (yo)

#### By language family

| Language family | Languages |
| --- | --- |
| Afro-Asiatic | Arabic (ar), Hebrew (he) |
| Austro-Asiatic | Vietnamese (vi) |
| Austronesian | Indonesian (id), Javanese (jv), Malay (ms), Tagalog (tl) |
| Baltic | Latvian (lv), Lithuanian (lt) |
| Basque | Basque (eu) |
| Dravidian | Malayalam (ml), Tamil (ta), Telugu (te) |
| Indo-European (Armenian) | Armenian (hy) |
| Indo-European (Indo-Aryan) | Bengali (bn), Marathi (mr), Hindi (hi), Urdu (ur) |
| Indo-European (Germanic) | Afrikaans (af), Danish (da), English (en), German (de), Swedish (sv) |
| Indo-European (Romance) | French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es) |
| Indo-European (Greek) | Greek (el) |
| Indo-European (Iranian) | Ossetian (os), Tajik (tg), Persian (fa) |
| Japonic | Japanese (ja) |
| Kartvelian | Georgian (ka) |
| Koreanic | Korean (ko) |
| Kra-Dai | Thai (th) |
| Mongolic | Buryat (bxr), Kalmyk (xal), Mongolian (mn) |
| Niger-Congo | Swahili (sw), Yoruba (yo) |
| Slavic | Belarusian (be), Bulgarian (bg), Russian (ru), Ukrainian (uk), Polish (pl) |
| Sino-Tibetan | Burmese (my) |
| Turkic (Karluk) | Uzbek (uz) |
| Turkic (Kipchak) | Bashkir (ba), Kazakh (kk), Kyrgyz (ky), Tatar (tt) |
| Turkic (Oghuz) | Azerbaijani (az), Chuvash (cv), Turkish (tr), Turkmen (tk) |
| Turkic (Siberian) | Tuvan (tyv), Yakut (sah) |
| Uralic | Estonian (et), Finnish (fi), Hungarian (hu) |

## Technical details

The model was pretrained on 16 V100 GPUs for 600k training steps with a fixed set of hyperparameters: a vocabulary size of 100k, a context window of 2048, a learning rate of 2e-4, and a batch size of 4.
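
For orientation, the stated vocabulary size and context window map onto a GPT-2-style configuration as sketched below; the depth and width of the 13B model are not given in this card, so those values are placeholders:

```python
from transformers import GPT2Config

# GPT-2-style config with the hyperparameters stated above.
# n_layer / n_embd / n_head are NOT specified in this card;
# the values below are placeholders, not the real ones.
config = GPT2Config(
    vocab_size=100_000,  # 100k-token vocabulary
    n_positions=2048,    # 2048-token context window
    n_layer=40,          # placeholder depth
    n_embd=5120,         # placeholder width
    n_head=40,           # placeholder number of attention heads
)
```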

The mGPT architecture is based on GPT-3. We follow the architecture description of Brown et al. (2020) and build on the GPT-2 code base (Radford et al., 2019) from the HuggingFace Transformers library (Wolf et al., 2020) and Megatron-LM (Shoeybi et al., 2019).
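
Because the model uses the GPT-2 code base, it can be loaded with the standard causal-LM classes from Transformers. A minimal generation sketch (the sampling parameters are illustrative, not recommended settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT-13B")
model = AutoModelForCausalLM.from_pretrained("ai-forever/mGPT-13B")

# Any of the 61 training languages should work as a prompt.
inputs = tokenizer("The capital of Finland is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,  # illustrative sampling settings
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```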

## Perplexity

The mGPT-13B model achieves perplexities within the 2-to-10 range for the majority of languages, including Dravidian (Malayalam, Tamil, Telugu), Indo-Aryan (Bengali, Hindi, Marathi), Slavic (Belarusian, Ukrainian, Russian, Bulgarian), Sino-Tibetan (Burmese) and Kipchak (Bashkir, Kazakh) languages, among others. Only seven languages from different families show higher perplexities, up to 20.
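
For reference, per-text perplexity can be computed with the Transformers API as sketched below; the evaluation data and tokenization details behind the reported scores are not specified in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT-13B")
model = AutoModelForCausalLM.from_pretrained("ai-forever/mGPT-13B")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(mean negative log-likelihood of the tokens).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids the model returns the mean
        # token-level cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

print(perplexity("Helsinki is the capital of Finland."))
```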

#### Language-wise perplexity results

![](https://i.imgur.com/aIKEpPE.png)

#### Family-wise perplexity results

![](https://i.imgur.com/1ugWbXc.png)

_The scores are averaged over the languages within each family._