
lee-ite/Llama-3.1-8B-PHQ

This LoRA was converted to GGUF format from lee-ite/Llama-3.1-8B-PHQ-lora using llama.cpp. The base model is meta-llama/Meta-Llama-3.1-8B-Instruct.

Use with llama.cpp

You need to merge the LoRA GGUF into the base model using llama.cpp, or apply it as an adapter at runtime; a sketch of both options follows below.
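
A minimal sketch of both options, assuming the base model and this adapter have already been downloaded as Meta-Llama-3.1-8B-Instruct.gguf and Llama-3.1-8B-PHQ-lora.gguf (filenames and paths are placeholders). llama.cpp's llama-export-lora tool merges the adapter into a standalone GGUF, while llama-cli can also apply it at runtime:

```bash
# Merge the LoRA adapter into the base model, producing a standalone GGUF.
./llama-export-lora \
    -m Meta-Llama-3.1-8B-Instruct.gguf \
    --lora Llama-3.1-8B-PHQ-lora.gguf \
    -o Llama-3.1-8B-PHQ-merged.gguf

# Alternatively, apply the adapter at runtime without merging.
./llama-cli \
    -m Meta-Llama-3.1-8B-Instruct.gguf \
    --lora Llama-3.1-8B-PHQ-lora.gguf \
    -p "Hello"
```

The merged GGUF can then be used anywhere a regular llama.cpp model file is expected, without carrying the adapter file alongside it.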

GGUF

Model size: 8.03B params
Architecture: llama
Quantizations: 4-bit, 5-bit, 6-bit, 8-bit

