---
base_model: grimulkan/lzlv-longLORA-70b-rope8-32k-fp16
inference: false
license: cc-by-nc-2.0
model_creator: Grimulkan
model_name: lzlv longLORA 70b rope8 32k
model_type: llama
---

# lzlv longLORA 70b rope8 32k - GGUF

- Model creator: [Grimulkan](https://huggingface.co/grimulkan)
- Original model: [lzlv longLORA 70b rope8 32k](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-fp16)

## Description

This repo contains GGUF format model files for [Grimulkan's lzlv longLORA 70b rope8 32k](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-fp16).

## Provided files
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| lzlv_70b_fp16_hf.Q2_K.gguf | Q2_K | 2 | 25.46 GB |
| lzlv_70b_fp16_hf.Q3_K_M.gguf | Q3_K_M | 3 | 33.27 GB |
| lzlv_70b_fp16_hf.Q4_K_S.gguf | Q4_K_S | 4 | 39.25 GB |
| lzlv_70b_fp16_hf.Q4_K_M.gguf | Q4_K_M | 4 | 41.42 GB |
| lzlv_70b_fp16_hf.Q5_K_S.gguf | Q5_K_S | 5 | 47.46 GB |
| lzlv_70b_fp16_hf.Q5_K_M.gguf | Q5_K_M | 5 | 48.75 GB |
| lzlv_70b_fp16_hf.Q6_K.gguf | Q6_K | 6 | 56.59 GB |
| lzlv_70b_fp16_hf.Q8_0.gguf | Q8_0 | 8 | 73.29 GB |

Note: Hugging Face does not support uploading files larger than 50 GB, so the Q6_K and Q8_0 models are uploaded as split files.
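Before the split quants can be loaded, their parts must be concatenated back into a single GGUF file. A minimal sketch, assuming the parts follow a common `-split-*` naming pattern (the exact suffixes are an assumption; check the actual filenames in this repo's file listing):

```shell
# Join the split Q6_K parts into one GGUF file.
# The "-split-*" suffix pattern is an assumption -- verify the real
# part names in the repo before running.
cat lzlv_70b_fp16_hf.Q6_K.gguf-split-* > lzlv_70b_fp16_hf.Q6_K.gguf

# Once the joined file loads correctly, delete the parts to
# reclaim disk space.
rm lzlv_70b_fp16_hf.Q6_K.gguf-split-*
```

The same pattern applies to the Q8_0 parts. `cat` with a shell glob expands the parts in lexicographic order, which is why alphabetical part suffixes join correctly.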