WangZeJun committed
Commit
f98b1f9
1 Parent(s): a043df4

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
@@ -4,7 +4,7 @@ license: bigscience-bloom-rail-1.0
 
 https://github.com/zejunwang1/bloom_tuning
 
-You can use the following code to call the bloom-396m-chat model to generate a conversation:
+You can use the following code to call the bloom-820m-chat model to generate a conversation:
 
 ```python
 from transformers import BloomTokenizerFast, BloomForCausalLM
@@ -23,7 +23,8 @@ input_ids = input_ids.cuda()
 outputs = model.generate(input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
                          temperature=0.3, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
 
-output = tokenizer.decode(outputs[0])
-response = output.replace(text, "").replace('</s>', "")
+input_ids_len = input_ids.size(1)
+response_ids = outputs[0][input_ids_len:]
+response = tokenizer.decode(response_ids)
 print(response)
 ```
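
For reference, below is a minimal end-to-end sketch of what the README snippet looks like once this diff is applied. Only the two hunks above come from the repository; the model repo id (`WangZeJun/bloom-820m-chat`), the prompt handling, and the device setup are assumptions added here to make the example self-contained.

```python
# Sketch of the updated snippet. Lines not shown in the diff hunks above
# (model path, prompt construction, device placement) are assumptions.
from transformers import BloomTokenizerFast, BloomForCausalLM

model_name_or_path = "WangZeJun/bloom-820m-chat"  # assumed repo id
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path)
model = model.cuda()
model.eval()

text = "你好"  # user prompt; the exact prompt template is not shown in the diff
input_ids = tokenizer(text, return_tensors="pt").input_ids
input_ids = input_ids.cuda()

outputs = model.generate(input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
                         temperature=0.3, repetition_penalty=1.2,
                         eos_token_id=tokenizer.eos_token_id)

# New decoding logic from the diff: slice off the prompt tokens by position
# instead of string-replacing the prompt and '</s>' out of the decoded text.
input_ids_len = input_ids.size(1)
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids)
print(response)
```

Slicing the generated sequence at the prompt length, as the new hunk does, avoids the old string-based cleanup (`output.replace(text, "").replace('</s>', "")`), which depended on the literal `</s>` marker and would also strip any repetition of the prompt text inside the reply.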