
Set tokenizer model_max_length to 2048

#10 opened by joaogante (HF staff)

As described in the FLAN-UL2 blog post, the model's receptive field was increased from 512 to 2048 tokens, so the tokenizer's model_max_length should be set to 2048 to match.

There is also an n_positions field in the model config, set to 512, but I can't see it being used anywhere in transformers 🤔
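For anyone checking the change locally, here is a minimal sketch of how it surfaces in transformers (the repo id google/flan-ul2 and the defensive getattr read are my additions, not part of this PR):

```python
from transformers import AutoConfig, AutoTokenizer

# After this PR, the tokenizer advertises the model's full 2048-token
# receptive field instead of the old 512 default.
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
print(tokenizer.model_max_length)  # expected: 2048

# n_positions is read defensively: T5-style models use relative
# position buckets, so this config field does not cap input length.
config = AutoConfig.from_pretrained("google/flan-ul2")
print(getattr(config, "n_positions", None))  # 512 per the config, apparently unused
```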

ybelkada (Google org): thanks for fixing!

ybelkada changed pull request status to merged
