---
license: apache-2.0
language: 
  - en
  - ru
  - multilingual
---


# Model Card for xlm-roberta-large-qa-multilingual-finedtuned-ru 
 
# Model Details
 
## Model Description
 
More information needed  
 
- **Developed by:** Alexander Kaigorodov
- **Shared by [Optional]:** Alexander Kaigorodov
- **Model type:** Question Answering 
- **Language(s) (NLP):** English, Russian, Multilingual
- **License:** Apache 2.0 
- **Parent Model:** XLM-RoBERTa
- **Resources for more information:**
   - [Associated Paper](https://arxiv.org/pdf/1912.09723.pdf) 	


# Uses
 

## Direct Use
This model can be used for the task of question answering.
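
For illustration, a minimal sketch of direct use through the `question-answering` pipeline; the checkpoint name matches this repository, while the question and context below are made-up examples:

```python
from transformers import pipeline

# Load the extractive QA pipeline with this checkpoint.
qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

# The model is multilingual, so English and Russian inputs both work.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```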
 
## Downstream Use [Optional]
 
More information needed.
 
## Out-of-Scope Use
 
The model should not be used to intentionally create hostile or alienating environments for people. 
 
# Bias, Risks, and Limitations
 
 
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.



## Recommendations
 
 
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details
 
## Training Data
### XLM-RoBERTa large model whole-word masking fine-tuned on SQuAD
The model was pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets.
 
### Used QA Datasets
SQuAD + SberQuAD
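
As a sketch, both corpora are published on the Hugging Face Hub and can be loaded with the `datasets` library; the `squad` and `sberquad` dataset IDs below are an assumption about which versions were used for fine-tuning:

```python
from datasets import load_dataset

# English SQuAD v1.1 and its Russian counterpart SberQuAD.
# The exact dataset versions used for fine-tuning are not documented,
# so these Hub IDs are an assumption.
squad = load_dataset("squad", split="train")
sberquad = load_dataset("sberquad", split="train")

print(squad[0]["question"], squad[0]["answers"])
print(sberquad[0]["question"], sberquad[0]["answers"])
```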
 
 
## Training Procedure

 
### Preprocessing
 
More information needed 
 
### Speeds, Sizes, Times
More information needed 

 
# Evaluation
 
 
## Testing Data, Factors & Metrics
 
### Testing Data
 
More information needed 
 
 
### Factors
More information needed
 
### Metrics
 
More information needed
 
 
## Results 
The results obtained on SberQuAD are as follows:
```
f1 = 84.3
exact_match = 65.3
```
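
A minimal sketch of how such scores can be recomputed with the `evaluate` library's `squad` scorer; the prediction/reference pair below is hypothetical, and in practice the predictions would come from running the model over the SberQuAD dev set:

```python
import evaluate

# The SQuAD metric computes both exact_match and f1.
squad_metric = evaluate.load("squad")

# Hypothetical prediction/reference pair for demonstration only.
predictions = [{"id": "0", "prediction_text": "в Париже"}]
references = [{
    "id": "0",
    "answers": {"text": ["в Париже"], "answer_start": [20]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```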

 
# Model Examination
 
More information needed
 
# Environmental Impact
 
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
 
# Technical Specifications [optional]
 
## Model Architecture and Objective

More information needed 
 
## Compute Infrastructure
 
More information needed 
 
### Hardware
 
 
More information needed
 
### Software
 
More information needed.
 
# Citation

 
**BibTeX:**
 
 
```bibtex
@incollection{Efimov_2020,
	doi = {10.1007/978-3-030-58219-7_1},
	url = {https://doi.org/10.1007%2F978-3-030-58219-7_1},
	year = 2020,
	publisher = {Springer International Publishing},
	pages = {3--15},
	author = {Pavel Efimov and Andrey Chertok and Leonid Boytsov and Pavel Braslavski},
	title = {{SberQuAD} {\textendash} Russian Reading Comprehension Dataset: Description and Analysis},
	booktitle = {Lecture Notes in Computer Science}
}
```
 
 
 
 
# Glossary [optional]
More information needed 
 
# More Information [optional]
More information needed 

 
# Model Card Authors [optional]
 
Alexander Kaigorodov in collaboration with Ezi Ozoani and the Hugging Face team


# Model Card Contact
 
More information needed
 
# How to Get Started with the Model
 
Use the code below to get started with the model.
 
<details>
<summary> Click to expand </summary>

```python
 from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")

model = AutoModelForQuestionAnswering.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
 ```
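
A minimal inference sketch with the loaded model; the Russian question and context are illustrative:

```python
import torch

question = "Кто написал 'Войну и мир'?"
context = "'Войну и мир' написал Лев Толстой."

# Encode the question/context pair and run the model.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```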
</details>