---
base_model: Doctor-Shotgun/lzlv-limarpv3-l2-70b
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: mradermacher
tags:
- llama
- llama 2
---
## About

Static quants of https://huggingface.co/Doctor-Shotgun/lzlv-limarpv3-l2-70b

<!-- provided-files -->
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
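
If you prefer to do the concatenation in Python rather than the shell, here is a minimal sketch (the file names are taken from the Q6_K row in the table below and are an assumption; adjust them to the quant you actually downloaded):

```python
# Minimal sketch: join two split parts from this repo into one GGUF file.
# Part order matters: .split-aa must come before .split-ab.
from pathlib import Path

parts = [
    "lzlv-limarpv3-l2-70b.Q6_K.gguf.split-aa",
    "lzlv-limarpv3-l2-70b.Q6_K.gguf.split-ab",
]

with Path("lzlv-limarpv3-l2-70b.Q6_K.gguf").open("wb") as dst:
    for name in parts:
        with Path(name).open("rb") as src:
            # Copy in 1 MiB chunks to keep memory use low.
            while chunk := src.read(1 << 20):
                dst.write(chunk)
```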

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q2_K.gguf) | Q2_K | 25.6 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q3_K_XS.gguf) | Q3_K_XS | 28.4 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.IQ3_M.gguf) | IQ3_M | 31.0 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 |  |
| [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 |  |
| [PART 1](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q6_K.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q6_K.gguf.split-ab) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q8_0.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-GGUF/resolve/main/lzlv-limarpv3-l2-70b.Q8_0.gguf.split-ab) | Q8_0 | 73.4 | fast, best quality |
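
If you want to fetch a quant programmatically, a minimal sketch using the `huggingface_hub` Python library might look like this (the file name is taken from the Q4_K_M row above; swap in whichever quant you prefer):

```python
# Minimal sketch: download a single quant from this repo via huggingface_hub.
# The returned local path can then be passed to any GGUF-aware runtime,
# e.g. llama.cpp.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/lzlv-limarpv3-l2-70b-GGUF",
    filename="lzlv-limarpv3-l2-70b.Q4_K_M.gguf",
)
print(path)
```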

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or to request quantization of another model.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, enabling
me to do this work in my free time.

<!-- end -->