emfomy committed
Commit 5323818
1 Parent(s): 1727d9e

Update README.

Files changed (1)
  1. README.md +28 -3
README.md CHANGED
@@ -12,11 +12,36 @@ datasets:
metrics:
---

- # CKIP ALBERT Tiny Chinese — Part-of-Speech Tagging
+ # CKIP ALBERT Tiny Chinese
+
+ This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
+
+ 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
+
+ ## Homepage
+
+ * https://github.com/ckiplab/ckip-transformers

## Contributers

* [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)

- ## Attention
- Please Use `BertTokenizer` instead of `AutoTokenizer`!!!
+ ## Usage
+
+ Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
+
+ 請使用 BertTokenizerFast 而非 AutoTokenizer。
+
+ ```
+ from transformers import (
+     BertTokenizerFast,
+     AutoModelForTokenClassification,
+ )
+
+ tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
+ model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-tiny-chinese-pos')
+ ```
+
+ For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
+
+ 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
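
Not part of the commit: a minimal sketch of how the tokenizer and model loaded in the README snippet above could be applied to a sentence for part-of-speech tagging. The example sentence, the `torch.no_grad()` wrapper, and the `id2label` decoding step are illustrative assumptions, not documented CKIP usage.

```
import torch
from transformers import BertTokenizerFast, AutoModelForTokenClassification

# Same checkpoints as in the README snippet above.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-tiny-chinese-pos')

text = "中央研究院的語言模型"  # hypothetical example sentence (assumption)
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring tag for each token and map its id back to a label name.
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[tag_id.item()])
```

Decoding through `model.config.id2label` keeps the sketch independent of any particular tag inventory; the output includes the special `[CLS]` and `[SEP]` tokens, which callers would typically skip.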