Garbled characters from Vicuna 7b-v1.5
#10
by ChocoWu - opened
I am trying to use Vicuna v1.5 for inference. However, when I input the prompt "hi, how are you?", I get garbled output like this:

['hi, how are you? &=\\autoritéЉanesrices Villacci sobivent分нишalotsautoritéeusal kwietnehmardadrualal al']
Here is the basic information:
vicuna: 7b-v1.5
fschat==0.2.21
transformers==4.35.2
The inference code is as follows:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_name = "../pretrained_ckpt/vicuna/7b-v1.5/"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
# torch_dtype belongs on the model, not the tokenizer
model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()

prompt = 'hi, how are you?'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
print(input_ids)

generated_ids = model.generate(input_ids.to(model.device), max_length=30)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
Please use FastChat to apply the correct chat template. You can try this CLI command:

```
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --debug
```
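If you want to stay with plain transformers, you can build the prompt with FastChat's conversation template instead of passing the raw user message. Below is a minimal sketch, assuming the fschat version listed above and the local checkpoint path from the original snippet; the generation parameters are illustrative:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from fastchat.model import get_conversation_template

model_name = "../pretrained_ckpt/vicuna/7b-v1.5/"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()

# Wrap the user message in the Vicuna v1.1 conversation format
# (system prompt plus "USER: ... ASSISTANT:" markers)
conv = get_conversation_template("vicuna-7b-v1.5")
conv.append_message(conv.roles[0], "hi, how are you?")
conv.append_message(conv.roles[1], None)  # empty assistant turn to be generated
prompt = conv.get_prompt()

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens
print(tokenizer.decode(generated_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```

With the template applied, the model should return a normal reply. The raw prompt in the original snippet skips the system prompt and the USER:/ASSISTANT: markers that Vicuna was fine-tuned on, which is what produces the garbled continuation.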
weichiang changed discussion status to closed