---
license: cc-by-nc-2.0
---

This is a merge of [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into lizpreciatior's [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), with the extra embedding row and pad token removed so that the vocabularies match.

There is no additional fine-tuning. The resulting model appears to work correctly; you are welcome to test whether it truly behaves as the original model plus 32K context capability (use linear RoPE scaling with a factor of 8).
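As a minimal sketch, loading the model with the recommended linear RoPE scaling might look like the following (the model ID and loading options here are assumptions, not a tested recipe):

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "grimulkan/lzlv-longLORA-70b-rope8-32k-fp16"  # this repo (assumed)

# Llama-2's base context is 4096 tokens; a linear RoPE scaling factor of 8
# stretches that to 32768.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 8.0}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```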

[ChuckMcSneed](https://huggingface.co/ChuckMcSneed) ran a benchmark [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-fp16/discussions/2), indicating roughly 30% degradation in exchange for 8x the context length.

You could also try merging this with other models of longLORA descent (like [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).

A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2), and a 4-bit EXL2 quantization [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-4bpw-h6-exl2).

See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these.