How to use this model?
#23 · opened by Kevy
Hello everyone!
Can someone explain how to deploy this model?
Is it possible with llama.cpp? I'd like to use CPU instead of GPU.
Thanks!
It is explained well here:
https://www.youtube.com/watch?v=cCQdzqAHcFk
Installing llama.cpp for CPU as shown there worked for me.
I'm on Linux, not Windows :/
It's compatible just like any other llama model; everything works the same way:
https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models
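On Linux, a minimal CPU-only sketch looks like the steps below. The model filename and thread count are placeholders — substitute the actual quantized file for this model (depending on your llama.cpp version it may be a GGML `.bin` or a GGUF `.gguf` file) and your CPU's core count:

```shell
# Build llama.cpp from source (CPU-only by default; no GPU flags needed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference on CPU.
# -m : path to the quantized model file (placeholder below)
# -p : prompt
# -n : number of tokens to generate
# -t : number of CPU threads (set to your physical core count)
./main -m ./models/model.gguf -p "Hello, how are you?" -n 128 -t 8
```

Quantized variants (e.g. Q4) trade a little quality for much lower RAM use, which matters on CPU-only machines.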