
Llama.cpp-compatible GGUF versions of the original saiga_llama3_8b model (8B parameters).

Download one of the quantized versions, for example model-q4_K.gguf:

wget https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf/resolve/main/model-q4_K.gguf
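
The same file can also be fetched with the huggingface_hub Python package; a minimal sketch (the filename is just one of the available quantizations):

from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repository into the current directory.
hf_hub_download(
    repo_id="IlyaGusev/saiga_llama3_8b_gguf",
    filename="model-q4_K.gguf",  # pick any of the published quantizations
    local_dir=".",
)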

Download interact_llama3_llamacpp.py:

wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llama3_llamacpp.py

How to run:

pip install llama-cpp-python fire

python3 interact_llama3_llamacpp.py model-q4_K.gguf
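
If you would rather call llama-cpp-python directly instead of using the interactive script, a minimal sketch could look like the following (context size, sampling temperature, and the prompts are placeholder assumptions, not values taken from interact_llama3_llamacpp.py):

from llama_cpp import Llama

# Load the downloaded GGUF file; n_ctx is an assumed context window.
llm = Llama(model_path="model-q4_K.gguf", n_ctx=8192, verbose=False)

# create_chat_completion applies the model's chat template to the messages.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Saiga, a helpful assistant."},  # placeholder system prompt
        {"role": "user", "content": "Hello! Tell me about yourself."},
    ],
    temperature=0.6,
)
print(response["choices"][0]["message"]["content"])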

System requirements:

  • About 10 GB of RAM for q8_0; smaller quantizations need less.

Model details:

  • Format: GGUF
  • Model size: 8.03B parameters
  • Architecture: llama