the model can't generate any outputs

Running the code below, the model doesn't generate any output:

    from transformers import AutoTokenizer, PersimmonForCausalLM

    # init model and tokenizer (model_path is a local path or hub id for the Persimmon checkpoint)
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    prompt = "human: Hey, what should I eat for dinner?\n\nadept: "
    inputs = tokenizer(prompt, return_tensors="pt")

    model = PersimmonForCausalLM.from_pretrained(model_path)
    device = "cpu"  # set to "cuda" if a GPU is available
    model.to(device)

    # dumpModule = DumpModule(dumped_module=model)  # custom debug helper, not part of transformers
    # dumpModule.init_dump()

    # Generate
    generate_ids = model.generate(inputs.input_ids.to(device), max_length=30)
    r = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

    print(r)

Output (the completion after the prompt is only a zero-width space, U+200B):

['human: Hey, what should I eat for dinner?\n\nadept: \u200b']
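
A quick way to see the mismatch is to compare the special-token ids the tokenizer and the model config report. This is a minimal diagnostic sketch, assuming the same `model_path` as above:

    from transformers import AutoConfig, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    config = AutoConfig.from_pretrained(model_path)

    # If tokenizer and config disagree on these ids, generate() can start or stop
    # on the wrong special token, which shows up as an empty completion.
    print("tokenizer bos:", tokenizer.bos_token, tokenizer.bos_token_id)
    print("config bos_token_id:", config.bos_token_id)
    print("tokenizer eos_token_id:", tokenizer.eos_token_id, "| config eos_token_id:", config.eos_token_id)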

Solution: the tokenizer's bos_token_id should be 1.

Loading the tokenizer with `tokenizer = AutoTokenizer.from_pretrained(model_path, bos_token="<s>")` should fix it. I opened #7.
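
For completeness, here is the repro with the workaround applied; a minimal sketch, assuming the `bos_token="<s>"` override maps to id 1 in this checkpoint's vocabulary, as the fix above states:

    from transformers import AutoTokenizer, PersimmonForCausalLM

    # Override the bos token so that bos_token_id resolves to 1
    tokenizer = AutoTokenizer.from_pretrained(model_path, bos_token="<s>")
    assert tokenizer.bos_token_id == 1  # expected value per the fix above

    model = PersimmonForCausalLM.from_pretrained(model_path)
    prompt = "human: Hey, what should I eat for dinner?\n\nadept: "
    inputs = tokenizer(prompt, return_tensors="pt")

    generate_ids = model.generate(inputs.input_ids, max_length=30)
    print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True))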
