---
license: apache-2.0
---
llama.cpp revision [5921b8f](https://github.com/ggerganov/llama.cpp/commit/5921b8f089d3b7bda86aac5a66825df6a6c10603) was used for the conversion.
This model is a GGUF version of [tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0](https://huggingface.co/tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0), a Turkish instruction fine-tuned Llama-3-8B model.
Currently, only Q8_0 quantization is available.
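As a rough sketch of how the Q8_0 file could be used for local inference with llama.cpp (the exact GGUF filename here is an assumption, and the `main` binary name matches llama.cpp builds from around the referenced revision):

```shell
# Build llama.cpp at (or near) the referenced revision
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 5921b8f089d3b7bda86aac5a66825df6a6c10603
make

# Run the Q8_0 GGUF file (filename is illustrative; use the actual
# file name from this repository's Files tab)
./main -m Llama-3-8B-TKK-Elite-V1.0.Q8_0.gguf \
  -p "Merhaba, kendini tanıtır mısın?" \
  -n 256
```

Newer llama.cpp builds rename the `main` binary to `llama-cli`, so adjust the command accordingly if you build from a more recent commit.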