---
base_model: grimulkan/lzlv-longLORA-70b-rope8-32k-fp16
inference: false
license: cc-by-nc-2.0
model_creator: Grimulkan
model_name: lzlv longLORA 70b rope8 32k
model_type: llama
---

# lzlv longLORA 70b rope8 32k - GGUF

## Description

This repo contains GGUF format model files for Grimulkan's lzlv longLORA 70b rope8 32k.
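A GGUF file from this repo can be loaded with any GGUF-compatible runtime such as llama.cpp or its Python bindings. The sketch below is a minimal example using llama-cpp-python; the file name, context size, and GPU layer count are illustrative assumptions, not settings prescribed by this repo.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path, n_ctx and n_gpu_layers values are assumptions; adjust them
# to the quant you downloaded and to your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="lzlv-longLORA-70b-rope8-32k.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=32768,      # this model targets a 32k context (rope8), assuming enough RAM/VRAM
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

output = llm(
    "Write a short story about a lighthouse keeper.",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```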

## Provided files

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| lzlv-longLORA-70b-rope8-32k.Q2_K.gguf | Q2_K | 2 | 25.46 GB |
| lzlv-longLORA-70b-rope8-32k.Q3_K_M.gguf | Q3_K_M | 3 | 33.27 GB |
| lzlv-longLORA-70b-rope8-32k.Q4_K_S.gguf | Q4_K_S | 4 | 39.25 GB |
| lzlv-longLORA-70b-rope8-32k.Q4_K_M.gguf | Q4_K_M | 4 | 41.42 GB |
| lzlv-longLORA-70b-rope8-32k.Q5_K_S.gguf | Q5_K_S | 5 | 47.46 GB |
| lzlv-longLORA-70b-rope8-32k.Q5_K_M.gguf | Q5_K_M | 5 | 48.75 GB |
| lzlv-longLORA-70b-rope8-32k.Q6_K.gguf | Q6_K | 6 | 56.59 GB |
| lzlv-longLORA-70b-rope8-32k.Q8_0.gguf | Q8_0 | 8 | 73.29 GB |
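To fetch a single quant programmatically rather than through the web UI, huggingface_hub's `hf_hub_download` can be used. The repo ID below is an assumption based on this card's title; substitute the actual repository path.

```python
# Hypothetical download sketch using huggingface_hub (pip install huggingface_hub).
# The repo_id is an assumption; replace it with this repository's actual ID.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="NanoByte/lzlv-longLORA-70b-rope8-32k-GGUF",  # assumed repo ID
    filename="lzlv-longLORA-70b-rope8-32k.Q4_K_M.gguf",   # pick any quant from the table above
    local_dir=".",                                         # save into the current directory
)
print(local_path)
```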

Note: Hugging Face does not support uploading files larger than 50 GB, so I have uploaded the Q6_K and Q8_0 quants as split files.
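How the split parts are reassembled depends on how they were produced. Assuming they are plain byte-level splits (the Unix `split` style, normally rejoined with `cat`), a minimal Python sketch for joining them is shown below; the part file pattern is a placeholder. If the parts were instead produced with llama.cpp's gguf-split tool, merge them with that tool rather than concatenating.

```python
# Minimal sketch for rejoining plain binary split parts into one GGUF file.
# Assumes the parts were made with a simple byte-level split (e.g. the Unix
# `split` command); the file pattern below is a placeholder -- check the
# repo's file listing for the real part names and keep them in order.
import glob
import shutil

parts = sorted(glob.glob("lzlv-longLORA-70b-rope8-32k.Q8_0.gguf.part*"))

with open("lzlv-longLORA-70b-rope8-32k.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # stream each part in sequence
```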