---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "token-classification"
---

# chinese-bert-wwm-ext-upos

## Model Description

This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged with its [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) label.

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Load the tokenizer and the UPOS token-classification model
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
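
Once the tokenizer and model are loaded, tagging works like any `transformers` token classifier. The snippet below is a minimal inference sketch, not part of the original card: the input sentence is an arbitrary example, the printed output includes the special `[CLS]`/`[SEP]` tokens, and the label names come from `model.config.id2label` (they may carry `B-`/`I-` prefixes for multi-character words).

```py
import torch

text = "我是留学生"  # arbitrary example sentence, "I am a foreign student"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring label for each token and map it to its UPOS name
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])
```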

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer, POS-tagger, and dependency parser with BERT/RoBERTa models