Freezing at AutoModelForSeq2SeqLM function call

#20
by mrwalters - opened

I am running the sample code given on the model card tab. However, whenever I run it, it freezes at this line and then stops without raising any exceptions or errors. Is there another way I can load the model? Or could there be something wrong with my system configuration?

Stops at this line:
aya_model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

Probably the same issue as on here: https://huggingface.co/CohereForAI/aya-101/discussions/7

The issue is a lack of RAM to load the 46 GB model.
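
If RAM is indeed the bottleneck, one possible workaround is a minimal sketch like the one below: load the weights in half precision and let accelerate place or offload them, which reduces the system memory needed while loading. This assumes you have a GPU (or enough disk for offloading), that the accelerate package is installed, and that checkpoint is "CohereForAI/aya-101"; not tested on your setup.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "CohereForAI/aya-101"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
aya_model = AutoModelForSeq2SeqLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,   # fp16 weights instead of fp32, roughly halving memory
    device_map="auto",           # let accelerate spread weights across GPU/CPU/disk
    low_cpu_mem_usage=True,      # avoid materializing a full extra copy in system RAM
)

If it still runs out of memory, adding more swap or using a machine with more RAM may be the only option.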
