---
language:
- "be"
- "bg"
- "mk"
- "ru"
- "sr"
- "uk"
tags:
- "belarusian"
- "bulgarian"
- "macedonian"
- "russian"
- "serbian"
- "ukrainian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---

# bert-base-slavic-cyrillic-upos

## Model Description

This is a BERT model pre-trained on Slavic-Cyrillic Universal Dependencies treebanks ([UD_Belarusian](https://universaldependencies.org/be/), [UD_Bulgarian](https://universaldependencies.org/bg/), [UD_Russian](https://universaldependencies.org/ru/), [UD_Serbian](https://universaldependencies.org/treebanks/sr_set/), and [UD_Ukrainian](https://universaldependencies.org/treebanks/uk_iu/)) for POS-tagging and dependency parsing, derived from [ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base). Every word is tagged with its [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) label.

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Load the tokenizer and the UPOS token-classification model
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
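
As a quick sanity check (not part of the original card), the loaded objects can be wrapped in a standard `transformers` token-classification pipeline; the Russian sentence below is an arbitrary example:

```py
from transformers import TokenClassificationPipeline

# Build a token-classification pipeline from the model/tokenizer loaded above
tagger = TokenClassificationPipeline(model=model, tokenizer=tokenizer)

# Print each token piece together with its predicted UPOS label
for t in tagger("Мама мыла раму"):
    print(t["word"], t["entity"])
```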

or

```py
import esupar

# Load the tokenizer, POS-tagger, and dependency parser via esupar
nlp = esupar.load("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
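
The returned `nlp` object can then be applied to raw text; following the esupar README, printing the result gives the parse in CoNLL-U style (the sentence is an arbitrary example):

```py
doc = nlp("Мама мыла раму")  # arbitrary example sentence
print(doc)  # CoNLL-U style output with UPOS tags and dependency relations
```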

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer, POS-tagger, and Dependency-parser with BERT/RoBERTa/DeBERTa models