---
license: apache-2.0
language:
- el
- en
model_creator: ilsp
base_model: ilsp/Meltemi-7B-Instruct-v1 
library_name: gguf
prompt_template: |
  [INST] {prompt} [/INST]
quantized_by: ilsp
---

# Meltemi 7B Instruct Quantized models

![image/png](https://miro.medium.com/v2/resize:fit:720/format:webp/1*IaE7RJk6JffW8og-MOnYCA.png)

## Description

In this repository you can find quantized GGUF variants of the [Meltemi-7B-Instruct-v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1) model, created with [llama.cpp](https://github.com/ggerganov/llama.cpp) at the [Institute for Language and Speech Processing](https://www.athenarc.gr/en/ilsp) of the [Athena Research & Innovation Center](https://www.athenarc.gr/en).

## Provided files (Use case column taken from the llama.cpp documentation)

| Name | Quant method | Bits | Size | Approx. RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| [meltemi-instruct-v1_q3_K_M.bin](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-instruct-v1_q3_K_M.bin) | Q3_K_M | 3 | 3.67 GB | 6.45 GB | small, high quality loss |
| [meltemi-instruct-v1_q5_K_M.bin](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-instruct-v1_q5_K_M.bin) | Q5_K_M | 5 | 5.31 GB | 8.1 GB | large, low quality loss - recommended |
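
## Prompt format

The model expects the `[INST] {prompt} [/INST]` template shown in the metadata above. As a minimal sketch, a helper like the following can apply it before inference; the commented loading code assumes a locally downloaded file and the `llama-cpp-python` package, neither of which is part of this repository:

```python
def format_meltemi_prompt(prompt: str) -> str:
    """Wrap a user message in the [INST] ... [/INST] template expected by the model."""
    return f"[INST] {prompt} [/INST]"

# Example usage (hypothetical local path; requires llama-cpp-python):
# from llama_cpp import Llama
# llm = Llama(model_path="meltemi-instruct-v1_q5_K_M.bin")
# output = llm(format_meltemi_prompt("Ποια είναι η πρωτεύουσα της Ελλάδας;"), max_tokens=128)
```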