Update README.md
README.md CHANGED

```diff
@@ -15,14 +15,14 @@ widget:
 
 # RoBERTa large model for Finnish
 
-Pretrained model on Finnish language using a masked language modeling (MLM) objective.
+Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
 [this paper](https://arxiv.org/abs/1907.11692) and first released in
 [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
 makes a difference between finnish and Finnish.
 
 ## Model description
 
-RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
+Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
 it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
 publicly available data) with an automatic process to generate inputs and labels from those texts.
 
```
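The "automatic process to generate inputs and labels" that the Model description refers to can be sketched in a few lines. This is a deliberately simplified, token-level illustration, not the model's actual preprocessing: real RoBERTa masks byte-level BPE subwords and, of the ~15% of positions selected, replaces 80% with `<mask>`, 10% with a random token, and leaves 10% unchanged. Here every selected position simply becomes `<mask>`.

```python
import random

MASK = "<mask>"    # RoBERTa's mask token
MASK_PROB = 0.15   # fraction of positions selected for prediction, as in the paper

def make_mlm_example(tokens, rng):
    """Build (inputs, labels) for masked language modeling.

    Each position is masked with probability MASK_PROB; labels hold the
    original token at masked positions and None elsewhere, so unmasked
    positions can be ignored in the loss.
    """
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < MASK_PROB:
            inputs.append(MASK)   # the model sees the mask token...
            labels.append(tok)    # ...and must predict the original token
        else:
            inputs.append(tok)
            labels.append(None)   # no loss contribution at this position
    return inputs, labels

# Example on a Finnish sentence, with a fixed seed for reproducibility
tokens = "kissa istuu matolla ja katsoo ulos".split()
inputs, labels = make_mlm_example(tokens, random.Random(1))
```

With seed 1 only the first position is selected, so `inputs` begins with `<mask>` while `labels[0]` keeps the original word `"kissa"`; the model's task is to recover it from the surrounding context.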