yangapku committed
Commit
02d70ca
1 Parent(s): e7ecd2a

update quickusage

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -76,7 +76,10 @@ from transformers.generation import GenerationConfig
 # Note: our tokenizer rejects attacks, so you cannot input special tokens like <|endoftext|>; doing so throws an error.
 # To disable this check, you can pass `allowed_special`, which accepts the string "all" or a `set` of special tokens.
 # For example: tokens = tokenizer(text, allowed_special="all")
-tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+# We recommend checking whether BF16 is supported first. Run the commands below:
+# import torch
+# torch.cuda.is_bf16_supported()
 # use bf16
 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True, bf16=True).eval()
 # use fp16
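
For context on the `allowed_special` note in the diff: by default the Qwen tokenizer refuses to encode special tokens that appear in user input, and the kwarg opts back in. A minimal sketch, assuming the checkpoint is reachable and `trust_remote_code` is acceptable in your environment:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

text = "Hello<|endoftext|>"

# tokenizer(text) would raise here, because <|endoftext|> is a special token.

# Allow every special token:
tokens = tokenizer(text, allowed_special="all")

# Or allow only an explicit set:
tokens = tokenizer(text, allowed_special={"<|endoftext|>"})
```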
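The added comments suggest probing BF16 support before picking a precision. One way to fold that check into the model load, a sketch that reuses the `bf16`/`fp16` flags from the snippet and assumes a CUDA device and the Qwen/Qwen-7B-Chat checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM

# BF16 needs hardware support (e.g. Ampere or newer GPUs); fall back to FP16 otherwise.
use_bf16 = torch.cuda.is_available() and torch.cuda.is_bf16_supported()

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
    bf16=use_bf16,
    fp16=not use_bf16,
).eval()
```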