---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# BERT BASE (cased) fine-tuned on Bulgarian named-entity-recognition data

A model pretrained on the Bulgarian language using a masked language modeling (MLM) objective. BERT was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).

It was then fine-tuned on public Bulgarian named-entity-recognition data.

Finally, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925) (BERT-of-Theseus): during fine-tuning, modules of the large predecessor model are stochastically replaced by smaller successor modules, and the successor modules form the compressed model.

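As a rough intuition for that compression step, here is a minimal sketch of module replacing in the spirit of the BERT-of-Theseus paper. This is not the training code used for this model; the class and parameter names (`TheseusEncoder`, `replace_prob`) are illustrative assumptions.

```python
import torch
from torch import nn


class TheseusEncoder(nn.Module):
    """Illustrative sketch of progressive module replacing.

    `predecessors` are the frozen large-model blocks (e.g. BERT layers
    grouped in pairs); `successors` are the smaller trainable blocks.
    The grouping and names are assumptions for illustration only.
    """

    def __init__(self, predecessors, successors, replace_prob=0.5):
        super().__init__()
        assert len(predecessors) == len(successors)
        self.predecessors = nn.ModuleList(predecessors)
        self.successors = nn.ModuleList(successors)
        # In the paper the replacement rate is scheduled toward 1.0
        # over the course of training.
        self.replace_prob = replace_prob

    def forward(self, hidden):
        for pred, succ in zip(self.predecessors, self.successors):
            # During training, each module is independently swapped for
            # its successor with probability `replace_prob`; at inference
            # only the compressed successor modules are used.
            if self.training and torch.rand(()) >= self.replace_prob:
                with torch.no_grad():  # predecessor stays frozen
                    hidden = pred(hidden)
            else:
                hidden = succ(hidden)
        return hidden


# Toy usage, with linear blocks standing in for transformer layers:
enc = TheseusEncoder([nn.Linear(8, 8) for _ in range(3)],
                     [nn.Linear(8, 8) for _ in range(3)])
out = enc(torch.randn(2, 8))
```
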
### How to use

Here is how to use this model in PyTorch:

```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
...     'ner',
...     model='rmihaylov/bert-base-ner-theseus-bg',
...     tokenizer='rmihaylov/bert-base-ner-theseus-bg',
...     device=0,  # first CUDA GPU; use device=-1 to run on CPU
...     revision=None)
>>> output = model('Здравей, аз се казвам Иван.')  # "Hello, my name is Ivan."
>>> print(output)

[{'end': 26,
  'entity': 'B-PER',
  'index': 6,
  'score': 0.9937722,
  'start': 21,
  'word': '▁Иван'}]
```
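
The output above is per subword token (note the `B-PER` tag and the sentencepiece `▁` marker). If you prefer whole entities, recent versions of `transformers` accept an `aggregation_strategy` argument on the same pipeline (older releases used `grouped_entities=True` instead); a minimal sketch, assuming a recent release:

```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
...     'ner',
...     model='rmihaylov/bert-base-ner-theseus-bg',
...     tokenizer='rmihaylov/bert-base-ner-theseus-bg',
...     device=-1,  # CPU for this example
...     aggregation_strategy='simple')  # merge subword tokens into entity spans
>>> print(model('Здравей, аз се казвам Иван.'))
```

With aggregation enabled, each result carries an `entity_group` key (e.g. `PER`) instead of the per-token `B-`/`I-` labels.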