---
license: llama2
---
|
|
|
This is a merge of [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into Xwin-LM's [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), and removing the extra row and pad token so that the vocabularies match.
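For reference, the merge looks roughly like the sketch below. This is not the exact script used: the filename of the trained embed/norm checkpoint, the save path, and the assumption that LongAlpaca's added pad token yields a 32001-row embedding are all placeholders to check against the adapter repo.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in fp16.
base = AutoModelForCausalLM.from_pretrained(
    "Xwin-LM/Xwin-LM-70B-V0.1", torch_dtype=torch.float16
)

# Apply the LoRA adapter, then fold its deltas into the base weights.
model = PeftModel.from_pretrained(base, "Yukang/LongAlpaca-70B-lora")
model = model.merge_and_unload()

# LongLoRA also trains the embedding and norm layers; copy those tensors in
# from the weights shipped alongside the adapter. The filename here is
# hypothetical -- check the adapter repo for the actual checkpoint name.
trained = torch.load("longalpaca_embed_norm.bin", map_location="cpu")

# LongAlpaca adds a pad token, so grow the vocab to accept the trained
# embedding, then trim the extra row afterwards.
model.resize_token_embeddings(32001)
state = model.state_dict()
for name, tensor in trained.items():
    if "embed_tokens" in name or "norm" in name:
        state[name] = tensor.to(torch.float16)
model.load_state_dict(state)

# Remove the extra row and pad token so the vocabulary matches the
# original 32000-token Llama-2 tokenizer.
model.resize_token_embeddings(32000)

model.save_pretrained("xwin-70b-longlora-32k")
```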
|
|
|
There is no additional fine-tuning. The resulting model does not appear to be broken, but you may want to verify for yourself that it truly behaves like the original model plus 32K context capability (use linear RoPE scaling with a factor of 8).
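A minimal loading sketch with the required scaling (the repo id below is a placeholder for wherever this merge is published):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xwin-70b-longlora-32k"  # hypothetical repo id / local path

# rope_scaling is forwarded to the model config: linear scaling, factor 8,
# which stretches the 4K-trained positions to cover 32K context.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    rope_scaling={"type": "linear", "factor": 8.0},
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```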
|
|
|
You could also try merging this with other models descended from LongLoRA (such as [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).
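The discussion linked below covers how such merges are actually done; purely as a rough illustration, a naive 50/50 linear merge between two models of the same lineage looks like this (ids/paths are placeholders, and serious merges usually go through a dedicated tool such as mergekit rather than a uniform blend):

```python
import torch
from transformers import AutoModelForCausalLM

# Both models must share the rope-8/32K lineage and an identical vocabulary.
a = AutoModelForCausalLM.from_pretrained(
    "xwin-70b-longlora-32k", torch_dtype=torch.float16  # hypothetical id for this merge
)
b = AutoModelForCausalLM.from_pretrained(
    "grimulkan/aurelian-v0.5-70b-rope8-32K-fp16", torch_dtype=torch.float16
)

# Average every tensor 50/50; real merges often use per-tensor weights instead.
sa, sb = a.state_dict(), b.state_dict()
a.load_state_dict({k: (sa[k] + sb[k]) / 2 for k in sa})
a.save_pretrained("xwin-aurelian-50-50")
```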
|
|
|
See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these. |