---
license: apache-2.0
language:
- ru
- en
library_name: transformers
---

# RoBERTa-base from deepvk

A pretrained bidirectional encoder for the Russian language.

## Model Details

### Model Description

The model was pretrained with the standard masked language modeling (MLM) objective on a large text corpus including open social data, books, Wikipedia, web pages, etc.


- **Developed by:** VK Applied Research Team
- **Model type:** RoBERTa
- **Languages:** Mostly Russian, with a small fraction of other languages
- **License:** Apache 2.0

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and encoder weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("deepvk/roberta-base")
model = AutoModel.from_pretrained("deepvk/roberta-base")

text = "Привет, мир!"

# Tokenize the input and run it through the encoder to obtain hidden states
inputs = tokenizer(text, return_tensors='pt')
predictions = model(**inputs)
```
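
Since the model was pretrained with an MLM objective, it can also be queried through the `fill-mask` pipeline. A minimal sketch, assuming the published checkpoint includes the MLM head weights (otherwise the head is randomly initialized and the outputs are not meaningful):

```python
from transformers import pipeline

# Fill-mask inference; RoBERTa-style tokenizers use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="deepvk/roberta-base")

for prediction in fill_mask("Сегодня хорошая <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```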

## Training Details

### Training Data

500 GB of raw text in total, a mix of the following sources: Wikipedia, books, Twitter comments, Pikabu, Proza.ru, film subtitles,
news websites, and a social corpus.

### Training Procedure 

#### Training Hyperparameters

| Argument           | Value                |
|--------------------|----------------------|
| Training regime    | fp16 mixed precision |
| Training framework | Fairseq              |
| Optimizer          | Adam                 |
| Adam betas         | 0.9,0.98             |
| Adam eps           | 1e-6                 |
| Num training steps | 500k                 |

The model was trained on 8×A100 GPUs for ~22 days.
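
For reference, roughly the same optimizer settings expressed as Hugging Face `TrainingArguments`. This is only an illustrative mapping; the actual pretraining used Fairseq, and values not listed in the table above (output path, batch size, learning-rate schedule) are placeholders, not reported settings:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-base-ru",  # placeholder path, not from the model card
    fp16=True,                       # fp16 mixed precision
    max_steps=500_000,               # 500k training steps
    adam_beta1=0.9,                  # Adam betas 0.9, 0.98
    adam_beta2=0.98,
    adam_epsilon=1e-6,               # Adam eps
    # Note: the HF Trainer defaults to AdamW, which differs slightly from plain Adam.
)
```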

#### Architecture details 

Standard RoBERTa-base parameters:

| Argument                | Value          |
|-------------------------|----------------|
| Activation function     | gelu           |
| Attention dropout       | 0.1            |
| Dropout                 | 0.1            |
| Encoder attention heads | 12             |
| Encoder embed dim       | 768            |
| Encoder FFN embed dim   | 3,072          |
| Encoder layers          | 12             |
| Max positions           | 512            |
| Vocab size              | 50,266         |
| Tokenizer type          | Byte-level BPE |
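
These values can be cross-checked against the published configuration. A minimal inspection sketch; note that Hugging Face RoBERTa configs may report `max_position_embeddings` as 514 because of RoBERTa's position-id padding offset:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("deepvk/roberta-base")

# Print the architecture fields that correspond to the table above.
print("layers:         ", config.num_hidden_layers)
print("attention heads:", config.num_attention_heads)
print("embed dim:      ", config.hidden_size)
print("ffn embed dim:  ", config.intermediate_size)
print("max positions:  ", config.max_position_embeddings)  # may show 514 (512 + padding offset)
print("vocab size:     ", config.vocab_size)
print("activation:     ", config.hidden_act)
```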

## Evaluation

Results on the Russian SuperGLUE dev set.

The best result among base-size models is shown in bold.

| Model                                                                  | RCB       |  PARus | MuSeRC  | TERRa | RUSSE   | RWSD    | DaNetQA | Score     |
|------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------|
| [vk-roberta-base](https://huggingface.co/deepvk/roberta-base)          | 0.46      |  0.56  | 0.679   | 0.769 | 0.960   | 0.569   | 0.658   | 0.665     |
| [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433     |  0.56  | 0.625   | 0.59  | 0.943   | 0.569   | 0.726   | 0.635     |
| [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base)       | 0.450     |**0.61**|**0.722**| 0.704 | 0.948   | 0.578   |**0.76** |**0.682**  |
| [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased)        | 0.467     |  0.57  | 0.587   | 0.704 | 0.953   |**0.583**| 0.737   | 0.657     |
| [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base)        | **0.491** |**0.61**| 0.663   | 0.769 |**0.962**| 0.574   | 0.678   | 0.678     |
| [sber-roberta-large](https://huggingface.co/ai-forever/ruRoberta-large)| 0.463     |  0.61  | 0.775   | 0.886 | 0.946   | 0.564   | 0.761   | 0.715     |
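
The scores above imply per-task fine-tuning on the Russian SuperGLUE training sets. The exact fine-tuning setup is not specified here; as a hedged illustration only, attaching a classification head for a sentence-pair task such as TERRa could look like this (the head is newly initialized and must be trained before the logits are meaningful):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepvk/roberta-base")
# Two labels for textual entailment (entailment / not entailment)
model = AutoModelForSequenceClassification.from_pretrained("deepvk/roberta-base", num_labels=2)

premise = "Кошка спит на диване."      # example inputs, not from the benchmark
hypothesis = "На диване спит животное."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, 2); train the head before using these scores
```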