mymusise committed
Commit d5291e3
1 Parent(s): 1611f57

Update README.md

Files changed (1):
  1. README.md +7 -19
README.md CHANGED
@@ -2,7 +2,7 @@
 language: zh
 widget:
 - text: "天下熙熙,"
-- text: "天气不错,"
+- text: "清华大学是"
 ---
 
 <h1 align="center">
@@ -22,27 +22,15 @@ And the `CPM-Generate-distill` is the distill model of `CPM`.
 How to use this model directly from the 🤗/transformers library:
 
 ```python
-from transformers import XLNetTokenizer, TFGPT2LMHeadModel
-from transformers import TextGenerationPipeline
-import jieba
-
-# add special processing for spaces and newlines
-class XLNetTokenizer(XLNetTokenizer):
-    translator = str.maketrans(" \n", "\u2582\u2583")
-
-    def _tokenize(self, text, *args, **kwargs):
-        text = [x.translate(self.translator) for x in jieba.cut(text, cut_all=False)]
-        text = " ".join(text)
-        return super()._tokenize(text, *args, **kwargs)
-
-    def _decode(self, *args, **kwargs):
-        text = super()._decode(*args, **kwargs)
-        text = text.replace(' ', '').replace('\u2582', ' ').replace('\u2583', '\n')
-        return text
-
-tokenizer = XLNetTokenizer.from_pretrained('mymusise/CPM-Generate-distill')
-model = TFGPT2LMHeadModel.from_pretrained("mymusise/CPM-Generate-distill")
+from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead
+
+tokenizer = AutoTokenizer.from_pretrained("mymusise/CPM-Generate-distill")
+model = AutoModelWithLMHead.from_pretrained("mymusise/CPM-Generate-distill")
 
 text_generater = TextGenerationPipeline(model, tokenizer)
 
-print(text_generater("天下熙熙,", max_length=15, top_k=1, use_cache=True, prefix=''))
+print(text_generater("清华大学是", max_length=50, do_sample=True, top_p=0.9))
 ```
 
 ![avatar](https://github.com/mymusise/CPM-TF2Transformer/raw/main/example-cpm-distill.jpeg)