Commit 30f80bf by rmihaylov · 1 parent: bfb3a94 · Create README.md
---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# BERT BASE (cased) finetuned on Bulgarian natural-language-inference data

A model pretrained on Bulgarian text using a masked language modeling (MLM) objective. The BERT architecture was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it distinguishes
between bulgarian and Bulgarian. The pretraining data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).

It was then finetuned on private Bulgarian NLI data.

Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).