---
license: afl-3.0
---

**Please use the 'Bert' tokenizer classes together with the 'Nezha' model classes.**

[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.

The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch).

## Example Usage

```python
from transformers import BertTokenizer, NezhaModel

tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-cn-large")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-large")

text = "我爱北京天安门"  # "I love Beijing Tiananmen"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
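Why a 'Bert' tokenizer works here: BERT-style tokenizers place each CJK character in its own token before running WordPiece, which is why the Chinese sentence above tokenizes cleanly without a NEZHA-specific tokenizer. A minimal sketch of that character-splitting behavior (not the actual `transformers` implementation, which also handles WordPiece merges and special cases):

```python
def char_tokenize(text):
    """Sketch of BERT-style CJK handling: one token per character,
    wrapped in the [CLS]/[SEP] special tokens."""
    return ["[CLS]"] + list(text) + ["[SEP]"]

tokens = char_tokenize("我爱北京天安门")
print(tokens)  # ['[CLS]', '我', '爱', '北', '京', '天', '安', '门', '[SEP]']
```

The real `BertTokenizer` additionally maps each token to a vocabulary id and produces attention masks, but the per-character split is the key reason it transfers to NEZHA unchanged.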