The generated results are very random

#1 opened by jeremysun1224

The torch version I am using is 2.4.0a0+f70bd71a48.nv24.06 (from the NVIDIA container), and the transformers version is 4.42.4.

When I run the get-started code given on the Hugging Face SeaLLM3-7B-Chat model card, the generated results are not ideal.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
  "SeaLLMs/SeaLLM3-7B-chat",
  torch_dtype=torch.bfloat16, 
  device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM3-7B-chat")

# prepare messages to model
prompt = "What can you do for me?"
messages = [
    {"role": "system", "content": "You are an expert in parsing logistics addresses from the Philippines, specializing in converting addresses into structured JSON format based on specified fields."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

print(f"Response:\n {response[0]}")
Response:
 I am an expert in parsing logistics addresses from the Philippines, specializing in converting addresses into structured JSON format based on specified fields.惢惢owellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowellcomeowell

(screenshot showing more of the response)

However, when I used your open-source demo in the Hugging Face Space, the model answered the same question very well. May I ask why?


SeaLLMs - Language Models for Southeast Asian Languages org

Hi, you can just set eos_token_id in the model.generate() call, and the output will become normal. See the example below, based on your provided code snippet:

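(The example was attached as a screenshot; below is a minimal sketch of the presumable change, based on the code snippet from the opening post.)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    eos_token_id=tokenizer.eos_token_id,  # stop generation at the tokenizer's end-of-sequence token
)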


Thank you for your reply; the issue has been resolved. SeaLLMs is very meaningful work.

But could specifying eos_token_id lead to premature termination, reduced text diversity, or a dependency on how the end-of-sequence token was marked in the training data?

Looking forward to your reply.

Perhaps you should change the eos_token_id in generation_config to 151645 instead of the current 151643. I found that after this change there is no need to pass tokenizer.eos_token_id anymore.

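For reference, the same override can also be applied in code via the model's generation config rather than by editing generation_config.json. A sketch, assuming the Qwen2-style vocabulary that SeaLLM3 uses (where 151643 is <|endoftext|> and 151645 is <|im_end|>):

# Override the end-of-sequence id from the repository's generation_config.json
# so that generate() stops at the chat end-of-turn token instead of running on.
model.generation_config.eos_token_id = 151645  # <|im_end|> (was 151643, <|endoftext|>)

generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)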

SeaLLMs - Language Models for Southeast Asian Languages org

Yes, you can also change the model's generation config, but these two approaches should actually be equivalent.

I have revised the generation_config file. Thanks for your feedback!
