
What's the RAM requirement to run this locally?

#8
by hk2257853 - opened

Awesome work on this model!
I ran it through Google Colab, and it works pretty well.

I was curious to know its RAM requirements. I'm looking for a lightweight LLM that I can run locally.
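
For context, my rough mental math is that the weights alone take about (parameter count) × (bytes per parameter) of RAM, with activations, KV cache, and framework overhead on top. Here's a quick back-of-the-envelope sketch; the 1.1B parameter count below is just a placeholder, not this model's actual size:

```python
# Back-of-the-envelope RAM estimate for holding the weights only.
# Activations, KV cache, and framework overhead add more on top.

def weight_ram_gib(num_params: float, bytes_per_param: float) -> float:
    """RAM in GiB needed just for the weights at a given precision."""
    return num_params * bytes_per_param / (1024 ** 3)

params = 1.1e9  # placeholder parameter count, not this model's actual size

print(f"fp32: {weight_ram_gib(params, 4):.1f} GiB")    # full precision
print(f"fp16: {weight_ram_gib(params, 2):.1f} GiB")    # half precision
print(f"int8: {weight_ram_gib(params, 1):.1f} GiB")    # 8-bit quantized
print(f"int4: {weight_ram_gib(params, 0.5):.1f} GiB")  # 4-bit quantized
```

Is that roughly the right way to think about it for this model, or is there a recommended minimum?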

Thank you.
