model documentation

#2
by nazneen - opened
Files changed (1)
  1. README.md +178 -6
README.md CHANGED
@@ -1,21 +1,193 @@
  ---
  language:
  - en
  - ru
  - multilingual
- license: apache-2.0
  ---
- # XLM-RoBERTa large model whole word masking finetuned on SQuAD
  Pretrained model using a masked language modeling (MLM) objective.
  Fine-tuned on English and Russian QA datasets.
-
- ## Used QA Datasets
  SQuAD + SberQuAD

- [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is here! Recommend to read!

- ## Evaluation results
  The results obtained are the following (SberQuAD):
  ```
  f1 = 84.3
  exact_match = 65.3
  ---
+ license: apache-2.0
  language:
  - en
  - ru
  - multilingual
  ---
+
+ # Model Card for xlm-roberta-large-qa-multilingual-finedtuned-ru
+
+ # Model Details
+
+ ## Model Description
+
+ XLM-RoBERTa large model with whole word masking, fine-tuned for extractive question answering on English and Russian QA datasets (SQuAD + SberQuAD).
+
+ - **Developed by:** Alexander Kaigorodov
+ - **Shared by [Optional]:** Alexander Kaigorodov
+ - **Model type:** Question Answering
+ - **Language(s) (NLP):** English, Russian, Multilingual
+ - **License:** Apache 2.0
+ - **Parent Model:** XLM-RoBERTa
+ - **Resources for more information:**
+   - [Associated Paper](https://arxiv.org/pdf/1912.09723.pdf)
+
+ # Uses
+
+ ## Direct Use
+
+ This model can be used for the task of extractive question answering.
+
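+ For example, via the `transformers` question-answering pipeline (a minimal sketch; the question and context strings below are illustrative, not from the model card):
+
+ ```python
+ from transformers import pipeline
+
+ # Build a QA pipeline backed by this model
+ qa = pipeline(
+     "question-answering",
+     model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
+ )
+
+ # Extractive QA: the answer is a span copied out of the context
+ result = qa(
+     question="Where is the Eiffel Tower?",
+     context="The Eiffel Tower is located in Paris, France.",
+ )
+ print(result["answer"], result["score"])
+ ```
+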
+ ## Downstream Use [Optional]
+
+ More information needed.
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+
+ ### XLM-RoBERTa large model whole word masking finetuned on SQuAD
+
  Pretrained model using a masked language modeling (MLM) objective.
  Fine-tuned on English and Russian QA datasets.
+
+ ### Used QA Datasets
+
  SQuAD + SberQuAD
+
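+ Both corpora are standard reading-comprehension datasets; a loading sketch via the `datasets` library (the Hub identifiers below are assumptions, check the Hub for the exact names):
+
+ ```python
+ from datasets import load_dataset
+
+ # SQuAD (English) and SberQuAD (Russian) reading-comprehension datasets
+ squad = load_dataset("squad")
+ sberquad = load_dataset("sberquad")  # assumed Hub identifier
+ ```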
+
+ ## Training Procedure

+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ More information needed

+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
  The results obtained are the following (SberQuAD):
  ```
  f1 = 84.3
  exact_match = 65.3
+ ```
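+
+ For reference, SQuAD-style metrics can be computed along these lines (a simplified sketch; the official evaluation scripts additionally normalize answers, e.g. lowercasing and stripping punctuation):
+
+ ```python
+ from collections import Counter
+
+ def exact_match(prediction: str, truth: str) -> float:
+     # 1.0 if the predicted span equals the reference exactly, else 0.0
+     return float(prediction.strip() == truth.strip())
+
+ def f1_score(prediction: str, truth: str) -> float:
+     # Token-level F1: harmonic mean of precision and recall over shared tokens
+     pred_tokens = prediction.split()
+     truth_tokens = truth.split()
+     common = Counter(pred_tokens) & Counter(truth_tokens)
+     num_same = sum(common.values())
+     if num_same == 0:
+         return 0.0
+     precision = num_same / len(pred_tokens)
+     recall = num_same / len(truth_tokens)
+     return 2 * precision * recall / (precision + recall)
+ ```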
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed.
+
+ # Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @incollection{Efimov_2020,
+   doi = {10.1007/978-3-030-58219-7_1},
+   url = {https://doi.org/10.1007%2F978-3-030-58219-7_1},
+   year = 2020,
+   publisher = {Springer International Publishing},
+   pages = {3--15},
+   author = {Pavel Efimov and Andrey Chertok and Leonid Boytsov and Pavel Braslavski},
+   title = {{SberQuAD} {\textendash} Russian Reading Comprehension Dataset: Description and Analysis},
+   booktitle = {Lecture Notes in Computer Science}
+ }
+ ```
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Alexander Kaigorodov in collaboration with Ezi Ozoani and the Hugging Face team
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForQuestionAnswering
+
+ # Load the tokenizer and the fine-tuned extractive QA model from the Hugging Face Hub
+ tokenizer = AutoTokenizer.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
+ model = AutoModelForQuestionAnswering.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
+ ```
+ </details>
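+
+ Once loaded, the model can be queried along these lines (a minimal sketch; the question and context strings are illustrative):
+
+ ```python
+ import torch
+
+ question = "Where is the Eiffel Tower?"
+ context = "The Eiffel Tower is located in Paris, France."
+
+ # Encode the question/context pair
+ inputs = tokenizer(question, context, return_tensors="pt")
+
+ # The model predicts start and end logits over the input tokens
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Take the most likely answer span and decode it back to text
+ start = int(outputs.start_logits.argmax())
+ end = int(outputs.end_logits.argmax())
+ answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
+ print(answer)
+ ```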