---
license: llama2
language:
- en
---

# Euryale-1.4-L2-70B IQ2-GGUF

## Description
IQ2-GGUF quants of [Sao10K/Euryale-1.4-L2-70B](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B)

Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading quality too much even at 2 bpw, allowing you to run larger models on less powerful machines.

***NOTE:*** Currently you will need an experimental branch of Koboldcpp or Ooba for these quants to work.
- Nexesenex has compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- The [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works


[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)


# Models

Models: [IQ2-XS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XXS.gguf)

Regular GGUF Quants: [Here](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B-GGUF)

## Prompt Format

### Alpaca:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Input:
{input}

### Response:

```
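The template above can be filled in programmatically before sending the text to the model. A minimal sketch in plain Python; the `build_prompt` helper is hypothetical, not part of any library:

```python
# Alpaca template mirroring the format shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(prompt: str, input_text: str = "") -> str:
    """Fill the Alpaca template with an instruction and optional input text."""
    return ALPACA_TEMPLATE.format(prompt=prompt, input=input_text)

print(build_prompt("Summarize the text.", "GGUF is a file format used by llama.cpp."))
```

The resulting string is what you paste (or send via API) as the raw prompt; frontends like Koboldcpp and Ooba can also apply this template for you via their instruct-mode presets.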

## Contact
Kooten on discord