
blank output

#2
by iadoring - opened

I got the warning shown below, and there are also a lot of blank outputs. The length of my input is about 30k tokens. Are there any configurations I should set to avoid this?
"""This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (2048). Depending on the model, you may observe exceptions, performance degradation, or nothing at all."""
