Output is too long

#1
opened by osanseviero (HF staff)

The output is huge, but it also doesn't make any sense (it's gibberish). Is there a bug in the model weights or the inference code?

Jezia (Keras org)

I limited the output to 40 tokens and changed the tokenization of the prompt to use the trained vocabulary via "fit_on_texts". Previously, tokenization was done with a BERT tokenizer, which performed badly.
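For reference, here is a minimal sketch of that setup, assuming a Keras word-level generation model; the names train_texts, model, prompt, seq_len, and the greedy decoding loop are placeholders, not the actual repo code:

```python
# A minimal sketch, assuming a Keras word-level generation model.
# Placeholders (not from the actual repo): train_texts, model, prompt, seq_len.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

train_texts = ["example training sentence one", "example training sentence two"]  # placeholder corpus

# Build the vocabulary from the training corpus instead of using a BERT tokenizer.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_texts)
index_to_word = {i: w for w, i in tokenizer.word_index.items()}

def generate(model, prompt, max_tokens=40, seq_len=20):
    """Greedy decoding capped at 40 tokens, mirroring the limit described above."""
    tokens = tokenizer.texts_to_sequences([prompt])[0]
    for _ in range(max_tokens):
        padded = pad_sequences([tokens[-seq_len:]], maxlen=seq_len)
        probs = model.predict(padded, verbose=0)   # assumed output shape: (1, vocab_size)
        next_id = int(np.argmax(probs, axis=-1)[0])
        if next_id == 0:                           # index 0 is reserved for padding
            break
        tokens.append(next_id)
    return " ".join(index_to_word.get(i, "") for i in tokens)
```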
However, the output is still not fluent. What should we do to improve it?

Thanks

Jezia changed discussion status to closed
Jezia changed discussion status to open
