Empty responses / Zero new tokens generated as output

#6
by sanchimittal - opened

Hello, I am using the Instruct-GPT-J model with <|endoftext|> as both the eos_token and the bos_token.
For many input texts I get only <|endoftext|> as output, i.e. an empty response: the model generates just the input followed by the eos_token. My generate call looks like this:

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    bos_token_id=50256,
    eos_token_id=50256,
    temperature=0.7,
    min_new_tokens=5,
    max_new_tokens=2048,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]

For this tokenizer, <|endoftext|> has token id 50256.
The tokenizer and model are initialized like this:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("../weights/instruct-gpt-j-fp16/", bos_token='<|endoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>')
model = AutoModelForCausalLM.from_pretrained("../weights/instruct-gpt-j-fp16/").cuda()
model.resize_token_embeddings(len(tokenizer))  # needed because <|pad|> was added to the vocab
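
As a quick sanity check, the id can be confirmed directly from the tokenizer (this snippet only assumes the tokenizer above is already loaded):

# Sanity check: confirm that <|endoftext|> really maps to id 50256
print(tokenizer.convert_tokens_to_ids("<|endoftext|>"))  # 50256
print(tokenizer.eos_token_id)                            # 50256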

It does sometimes generate good enough output text, but only about 2 times out of 10. Can someone please help me understand why this issue is happening and what I can try to resolve it?
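
One thing I plan to try, in case a missing attention mask plus the custom <|pad|> token is the culprit, is the sketch below. Here prompt stands in for one of my input texts, and everything else reuses the model and tokenizer from above, so treat it as an untested sketch rather than a confirmed fix:

# Sketch: pass an explicit attention_mask and pad_token_id to generate(),
# since a custom <|pad|> token was added above and my original call passes
# no mask. `prompt` is a placeholder for one of my input texts.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
gen_tokens = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    do_sample=True,
    temperature=0.7,
    min_new_tokens=5,
    max_new_tokens=2048,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
# skip_special_tokens=True hides the trailing <|endoftext|> in the decoded text
gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]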
