Commit 2065d3e (parent: 7f85b78) by julien-c

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/iarfmoose/roberta-small-bulgarian/README.md

Files changed (1): README.md (+29, −0)
---
language: bg
---

# RoBERTa-small-bulgarian

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a smaller version of [RoBERTa-base-bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) with only 6 hidden layers, but with similar performance.

## Intended uses

This model can be used for cloze tasks (masked language modeling) or fine-tuned on other tasks in Bulgarian.
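
As a minimal usage sketch (not part of the original card), the model can be loaded with the `transformers` fill-mask pipeline; the Bulgarian example sentence is an illustrative assumption.

```python
from transformers import pipeline

# Load the model for cloze-style masked-token prediction.
fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-small-bulgarian")

# RoBERTa tokenizers use "<mask>" as the mask token; the sentence is illustrative.
for prediction in fill_mask("Столицата на България е <mask>."):
    print(prediction["token_str"], prediction["score"])
```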

## Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.

## Training data

This model was trained on the following data (a loading sketch follows the list):
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
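As an illustrative sketch (not from the original card), the OSCAR portion can be pulled with the `datasets` library; the config name below is an assumption based on OSCAR's naming on the Hugging Face Hub, and the Leipzig corpora are plain-text downloads handled separately.

```python
from datasets import load_dataset

# Deduplicated Bulgarian portion of OSCAR; the config name is an assumption
# based on OSCAR's naming on the Hugging Face Hub.
oscar_bg = load_dataset("oscar", "unshuffled_deduplicated_bg", split="train")

print(oscar_bg[0]["text"][:200])  # peek at the first document
```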

## Training procedure

The model was pretrained using a masked language-modeling objective with dynamic masking, as described [here](https://huggingface.co/roberta-base#preprocessing).

It was trained for 160k steps; the batch size was limited to 8 by available GPU memory.
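
For reference, a hedged sketch of how dynamic masking is commonly set up with `transformers`: `DataCollatorForLanguageModeling` re-samples masked positions every time a batch is assembled. This is an assumed reconstruction, not the author's original training script.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("iarfmoose/roberta-small-bulgarian")

# Dynamic masking: 15% of tokens are masked anew each time a batch is
# collated, rather than fixing masked positions once during preprocessing.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)
```

Because masks are drawn per batch, the model sees different masked positions for the same sentence across epochs, which is the property the linked description refers to.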