ai-forever committed
Commit 2bb66cb
1 Parent(s): f397ab4

Update README.md

Files changed (1)
  1. README.md +26 -0
README.md CHANGED
@@ -6,11 +6,37 @@ pipeline_tag: text-generation
  tags:
  - PyTorch
  - Transformers
+ - gpt3
+ - gpt2
+ - Deepspeed
+ - Megatron
  thumbnail: "https://github.com/sberbank-ai/mgpt"
  ---

  # Multilingual GPT model

+ We introduce a family of autoregressive GPT-like models with 1.3 billion parameters, trained on 60 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus.
+
+ We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron]() frameworks allow us to parallelize the training and inference steps effectively. The resulting models perform on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.
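
For illustration, here is a minimal sketch of how a GPT-2-style causal LM is typically wrapped for parallel training with DeepSpeed. This is not the mGPT training code; the model size, batch sizes, ZeRO stage, and optimizer settings below are assumptions chosen only to show the wiring.

```python
# Illustrative sketch only: generic DeepSpeed wiring for a GPT-2-style causal LM.
# The batch sizes, ZeRO stage, precision, and learning rate are placeholder
# assumptions, not the configuration actually used to train mGPT.
# Run under the `deepspeed` launcher, e.g.: deepspeed train_sketch.py
import deepspeed
from transformers import GPT2Config, GPT2LMHeadModel

# A small GPT-2-style model stands in for the real 1.3B-parameter architecture.
model = GPT2LMHeadModel(GPT2Config(n_positions=1024))

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-4}},
}

# deepspeed.initialize returns an engine that handles the distributed setup,
# mixed precision, and ZeRO partitioning of optimizer states across GPUs.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Inside the training loop, the engine replaces the usual optimizer calls:
#   loss = engine(input_ids, labels=input_ids).loss
#   engine.backward(loss)
#   engine.step()
```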
+
+ ## Code
+ The source code for the mGPT XL model is available on [GitHub](https://github.com/sberbank-ai/mgpt).
+
+ ## Paper
+ [Arxiv preprint](https://arxiv.org/user)
+
+ Cite us:
+ ```bibtex
+ ```
+
+ ## Languages
+
+ ## Training Data Statistics
+
+ ## Details
  The model was trained with a sequence length of 1024 using the transformers library by the [SberDevices](https://sberdevices.ru/) team on 80B tokens for 3 epochs. After that, the model was fine-tuned for 1 epoch with a sequence length of 2048.

  Total training time was around n days on n GPUs for n context and a few days on n GPUs for n context.
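
For convenience, a minimal inference sketch with the transformers library follows. The Hugging Face hub id `sberbank-ai/mGPT` and the generation settings are assumptions for illustration, not values stated in this card.

```python
# Minimal generation sketch for the mGPT causal LM.
# Assumption: the checkpoint is published under the hub id "sberbank-ai/mGPT";
# adjust the id to the actual model repository if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sberbank-ai/mGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The card states the model was fine-tuned with a 2048-token context,
# so prompts up to roughly that length should be supported.
prompt = "The Moon is a natural satellite of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```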