supermy committed
Commit 7d8f9b4
1 Parent(s): 940026a

Update README.md

Files changed (1)
  1. README.md +10 -13
README.md CHANGED
@@ -26,24 +26,21 @@ widget:
 Use the pipeline to call the model:
 
 ```python
->>> # Call the fine-tuned model
->>> senc="燕子归来,问昔日雕梁何处。 -"
->>> model_id="couplet-gpt2-finetuning"
->>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
-
->>> tokenizer = BertTokenizer.from_pretrained(model_id)
->>> model = GPT2LMHeadModel.from_pretrained(model_id)
->>> text_generator = TextGenerationPipeline(model, tokenizer)
->>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
->>> text_generator( senc,max_length=25, do_sample=True)
-[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
+>>> task_prefix = ""
+>>> sentence = task_prefix+"国色天香,姹紫嫣红,碧水青云欣共赏"
+>>> model_output_dir='couplet-hel-mt5-finetuning/'
+>>> from transformers import pipeline
+>>> translation = pipeline("translation", model=model_output_dir)
+>>> print(translation(sentence,max_length=28))
+[{'translation_text': '月圆花好,良辰美景,良辰美景喜相逢'}]
+
 ```
 Here is how to use this model to get the features of a given text in PyTorch:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
-model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
+tokenizer = AutoTokenizer.from_pretrained("supermy/couplet-helsinki")
+model = AutoModelForCausalLM.from_pretrained("supermy/couplet-helsinki")
 ```
 
 
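
The updated snippet loads the pipeline from a local fine-tuning output directory (`couplet-hel-mt5-finetuning/`). Below is a minimal sketch of the same call pointed at the Hub instead, assuming the published `supermy/couplet-helsinki` checkpoint is that fine-tuned model:

```python
# Sketch only, assuming supermy/couplet-helsinki on the Hub is the
# fine-tuned checkpoint referenced in the updated README.
from transformers import pipeline

# The couplet task is framed as "translation": first line in, matching second line out.
translation = pipeline("translation", model="supermy/couplet-helsinki")

sentence = "国色天香,姹紫嫣红,碧水青云欣共赏"
print(translation(sentence, max_length=28))
```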