grimulkan committed
Commit aab5b78
1 Parent(s): 68b24d7

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -11,6 +11,6 @@ There is no additional fine-tuning. The resulting model seems to not be broken..
 
  You could also try merging this with other models of longLORA descendency (like [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).
 
- A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2).
+ A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2), and a 4-bit EXL2 [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-4bpw-h6-exl2).
 
  See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these.
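
For reference, one way to fetch either of the linked EXL2 quants locally is with `huggingface_hub`. This is a minimal sketch, not part of the commit itself: the repo id is taken from the link added in the diff above, and the target directory is an arbitrary example.

```python
# Minimal sketch: download the 6-bit EXL2 quant referenced in the README diff.
# Assumes `pip install huggingface_hub`; local_dir below is an arbitrary example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2",
    local_dir="models/lzlv-longLORA-70b-6bpw-exl2",
)
print(f"Model files downloaded to: {local_path}")
```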