nicholascao committed on
Commit 39f2c92
2 Parent(s): 1c8d432 0ab884d

Merge branch 'main' of https://huggingface.co/nicholascao/chatbloom-1b7-sft into main

Files changed (1): README.md (+4 −3)
README.md CHANGED
````diff
@@ -35,16 +35,17 @@ See [Github](https://github.com/NicholasCao/ChatBloom) for details.
 ## Usage
 ```python
 import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
+from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained('nicholascao/chatbloom-1b7-sft')
+tokenizer.pad_token_id = tokenizer.eos_token_id
+
 model = AutoModelForCausalLM.from_pretrained('nicholascao/chatbloom-1b7-sft').half()
-generation_config = GenerationConfig.from_pretrained('nicholascao/chatbloom-1b7-sft')
 
 inputs = tokenizer('<Human>: Hello <eoh> <Assistant>:', return_tensors='pt').to(torch.cuda.current_device())
 model.to(torch.cuda.current_device())
 
-output = model.generate(**inputs, generation_config=generation_config)
+output = model.generate(**inputs, max_length=768, do_sample=True, temperature=0.8, top_k=50, early_stopping=True, repetition_penalty=1.05)
 output = tokenizer.decode(output[0], skip_special_tokens=True)
 print(output)
 ```
````
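The usage snippet above hard-codes the `<Human>: ... <eoh> <Assistant>:` chat prompt inline, and the decoded output echoes that prompt back. For readers adapting the snippet, the prompt assembly and reply extraction can be sketched with two small helpers; the helper names (`build_prompt`, `extract_reply`) are illustrative and not part of the repository:

```python
def build_prompt(user_message: str) -> str:
    # Wrap a user message in the prompt format the README uses:
    # '<eoh>' closes the human turn; the model continues after '<Assistant>:'.
    return f'<Human>: {user_message} <eoh> <Assistant>:'

def extract_reply(decoded: str) -> str:
    # tokenizer.decode returns prompt + generation; keep only the text
    # that follows the '<Assistant>:' marker.
    return decoded.split('<Assistant>:', 1)[-1].strip()

prompt = build_prompt('Hello')
print(prompt)  # <Human>: Hello <eoh> <Assistant>:
print(extract_reply(prompt + ' Hi there!'))  # Hi there!
```

Whether the `<eoh>` marker survives decoding depends on how the tokenizer registers it (`skip_special_tokens=True` strips it only if it is a special token), so splitting on `<Assistant>:` is the more robust cut point.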