## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='hhou435/chinese_roberta_L-2_H-128')
>>> unmasker("中国的首都是[MASK]京。")
[
    {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
     'score': 0.9427323937416077,
     'token': 1266,
     'token_str': '北'},
    {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
     'score': 0.029202355071902275,
     'token': 1298,
     'token_str': '南'},
    {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
     'score': 0.00977553054690361,
     'token': 691,
     'token_str': '东'},
    {'sequence': '[CLS] 中 国 的 首 都 是 葡 京 。 [SEP]',
     'score': 0.00489805219694972,
     'token': 5868,
     'token_str': '葡'},
    {'sequence': '[CLS] 中 国 的 首 都 是 新 京 。 [SEP]',
     'score': 0.0027360401581972837,
     'token': 3173,
     'token_str': '新'}
]
```
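As the output above shows, the pipeline returns a list of candidate fills sorted by score. A minimal sketch of post-processing that list, reusing the scores printed above rather than calling the model again:

```python
# Candidate fills copied from the pipeline output shown above.
predictions = [
    {'token_str': '北', 'score': 0.9427323937416077},
    {'token_str': '南', 'score': 0.029202355071902275},
    {'token_str': '东', 'score': 0.00977553054690361},
    {'token_str': '葡', 'score': 0.00489805219694972},
    {'token_str': '新', 'score': 0.0027360401581972837},
]

# The highest-scoring candidate completes the sentence: 北 + 京 = Beijing.
best = max(predictions, key=lambda p: p['score'])
print(best['token_str'])  # 北
```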

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('hhou435/chinese_roberta_L-2_H-128')
model = BertModel.from_pretrained("hhou435/chinese_roberta_L-2_H-128")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
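The `output` above is a model output object whose `last_hidden_state` field holds one hidden vector per token; for this L-2_H-128 miniature the hidden size is 128. A common way to reduce it to a single sentence vector is mean pooling over the token dimension, sketched here on a dummy tensor so it runs without downloading the model:

```python
import torch

# Dummy stand-in for output.last_hidden_state: a batch of 1 sentence,
# 10 tokens, hidden size 128 (matching the L-2_H-128 miniature).
last_hidden_state = torch.randn(1, 10, 128)

# Mean pooling over the token dimension yields one 128-d vector per sentence.
sentence_embedding = last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 128])
```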

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('hhou435/chinese_roberta_L-2_H-128')
model = TFBertModel.from_pretrained("hhou435/chinese_roberta_L-2_H-128")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data