Minimum Requirements for running the model in CUDA

#4 opened by Rafimc13

Hello guys,
Congratulations on the great work.
Could you please share the minimum requirements for running Meltemi on a GPU with CUDA?
For example, would an NVIDIA GPU with 4 GB of dedicated memory be able to run the model efficiently?
Also, is it possible to run the model in inference mode on a plain CPU, or is that not an option?
Thank you in advance.

Rafail

Institute for Language and Speech Processing org

Hello,
In general, if you want to run inference at half precision you need ~16 GB of VRAM (roughly 2 bytes per parameter for a 7B model, plus some overhead for activations and the KV cache).
To run inference with less VRAM, you'll either have to quantize the model yourself or try our AWQ or GGUF versions. Your best bet for CPU usage would be GGUF.
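For reference, here is a rough sketch of half-precision GPU inference with `transformers`. The model id `ilsp/Meltemi-7B-Instruct-v1` and the generation settings are assumptions on my part, so double-check the model card:

```python
# Minimal sketch: fp16 inference on a CUDA GPU with ~16 GB VRAM.
# Assumes the model id below is correct and that `accelerate` is
# installed (needed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilsp/Meltemi-7B-Instruct-v1"  # assumption; check the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision
    device_map="auto",          # place weights on the available GPU(s)
)

prompt = "Γράψε μια σύντομη πρόταση για την Αθήνα."  # "Write a short sentence about Athens."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For CPU, a similar sketch via `llama-cpp-python` against a local GGUF file; the filename and quantization level below are placeholders for whichever GGUF build you download:

```python
# Minimal sketch: CPU-only inference on a quantized GGUF build.
from llama_cpp import Llama

llm = Llama(
    model_path="meltemi-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune for your machine
)

out = llm("Ποια είναι η πρωτεύουσα της Ελλάδας;", max_tokens=64)
print(out["choices"][0]["text"])
```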

Thank you for the quick response. I will try the GGUF version.

I was waiting for a Greek LLM for a very long time. Congratulations. Sorry for my ignorance, but can this model be used for Retrieval-Augmented Generation (RAG)?

Institute for Language and Speech Processing org
edited Apr 17

Like any other model, it can be used as the "generator" in a RAG pipeline.
That said, for the storage and retrieval side you'll need embeddings that handle Greek well, which is not the case for every embedding model out there, so it may take some trial and error.
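To make the retrieval step concrete, here is a toy sketch using `sentence-transformers` with the multilingual model `paraphrase-multilingual-MiniLM-L12-v2`; whether that particular embedding model performs well on Greek is exactly the kind of thing you would need to verify:

```python
# Toy retrieval sketch: embed a few Greek documents, find the one
# closest to the query, and (in a full pipeline) prepend it to the
# prompt sent to Meltemi as the "generator".
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

docs = [
    "Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.",          # "Athens is the capital of Greece."
    "Ο Όλυμπος είναι το ψηλότερο βουνό της Ελλάδας.",   # "Olympus is the highest mountain in Greece."
]
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "Ποια είναι η πρωτεύουσα της Ελλάδας;"  # "What is the capital of Greece?"
q_emb = embedder.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each document.
scores = util.cos_sim(q_emb, doc_emb)[0]
best = docs[int(scores.argmax())]
print(best)  # the retrieved context to feed into the generator
```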

LVouk changed discussion status to closed
