---
license: apache-2.0
language:
- el
- en
model_creator: ilsp
base_model: ilsp/Meltemi-7B-Instruct-v1
library_name: gguf
prompt_template: |
  [INST] {prompt} [/INST]
quantized_by: ilsp
---
# Meltemi 7B Instruct Quantized models

## Description
This repository contains quantized GGUF variants of the Meltemi-7B-Instruct-v1 model, created with llama.cpp at the Institute for Language and Speech Processing of the Athena Research & Innovation Center.
## Provided files

(The "Use case" column is taken from the llama.cpp documentation.)
| Name | Quant method | Bits | Size | Approx. RAM required | Use case |
|---|---|---|---|---|---|
| meltemi-instruct-v1_q3_K_M.bin | Q3_K_M | 3 | 3.67 GB | 6.45 GB | small, high quality loss |
| meltemi-instruct-v1_q5_K_M.bin | Q5_K_M | 5 | 5.31 GB | 8.1 GB | large, low quality loss - recommended |
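When prompting these files directly (e.g. through llama.cpp or a binding such as llama-cpp-python), the prompt must be wrapped in the `[INST] {prompt} [/INST]` template shown in the metadata above. A minimal sketch of that wrapping in plain Python; the helper name is illustrative and not part of this repository:

```python
def format_prompt(prompt: str) -> str:
    """Wrap a user message in the [INST] template expected by Meltemi-7B-Instruct-v1."""
    return f"[INST] {prompt} [/INST]"


# Works for Greek or English input, matching the model's supported languages.
print(format_prompt("Ποια είναι η πρωτεύουσα της Ελλάδας;"))
# → [INST] Ποια είναι η πρωτεύουσα της Ελλάδας; [/INST]
```

The resulting string can then be passed as the prompt to a llama.cpp-based runtime loaded with one of the `.bin` files listed above.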