wangyulong committed
Commit 73d95d9
1 Parent(s): 9ccfebc

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -16,8 +16,8 @@ We pruned its vocabulary from 250880 to 46145 with Chinese corpus to reduce GPU
 ```python
 from transformers import BloomTokenizerFast, BloomForCausalLM
 
-tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-1.4b-zh')
-model = BloomForCausalLM.from_pretrained('Langboat/bloom-1.4b-zh')
+tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-1b4-zh')
+model = BloomForCausalLM.from_pretrained('Langboat/bloom-1b4-zh')
 
 print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt'))))
 ```
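
The only change in this commit is the repository ID: 'Langboat/bloom-1.4b-zh' becomes 'Langboat/bloom-1b4-zh'. The hunk context also notes the vocabulary was pruned from 250880 to 46145 tokens; below is a minimal sketch for sanity-checking that under the renamed ID (it assumes network access to the Hugging Face Hub, and that the tokenizer's `vocab_size` reports exactly the pruned count, which is an assumption, not something this commit states):

```python
from transformers import BloomTokenizerFast

# Load the tokenizer from the renamed repo ID introduced in this commit.
tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-1b4-zh')

# The README hunk says the vocabulary was pruned from 250880 to 46145,
# so the reported vocabulary size should match the pruned figure.
print(tokenizer.vocab_size)  # expected: 46145 per the README (assumption)
```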