grimulkan committed
Commit 99bc05b
1 Parent(s): 20391b4

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -2,7 +2,7 @@
 license: llama2
 ---
 
-This is an interleaved merge of [Xwin-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Xwin-longLORA-70b-rope8-32k-fp16) and [Euryale-1.3-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), using the same merge formula as alpindale's [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
+This is an interleaved merge of [Xwin-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Xwin-longLORA-70b-rope8-32k-fp16) and [Euryale-1.3-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16), using the same merge formula as alpindale's [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
 
 There is no additional fine-tuning. The resulting model does not seem to be broken... you can test whether it truly preserves the original models' behavior plus 32K context capability (use linear rope scaling 8).
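
For orientation, here is a conceptual sketch of what an interleaved merge of this kind looks like: alternating, overlapping blocks of transformer layers are taken from the two 70b donors and stacked into a deeper model. The block size and overlap below are illustrative assumptions, not the published goliath-120b recipe; check alpindale's model card for the exact layer ranges.

```python
# Conceptual sketch of an interleaved ("passthrough"-style) layer merge.
# Block size and overlap are illustrative assumptions, not the actual
# goliath-120b recipe.
XWIN = "grimulkan/Xwin-longLORA-70b-rope8-32k-fp16"
EURYALE = "grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16"


def interleaved_slices(n_layers: int = 80, block: int = 16, overlap: int = 8):
    """Alternate overlapping layer blocks from the two donor models."""
    slices, start, donor = [], 0, 0
    while start < n_layers:
        end = min(start + block, n_layers)
        slices.append(((XWIN, EURYALE)[donor], (start, end)))
        donor ^= 1  # switch donor model for the next block
        start = end if end == n_layers else end - overlap
    return slices


for model, (lo, hi) in interleaved_slices():
    print(f"layers [{lo:2d}, {hi:2d}) from {model}")
```

In practice, merges like this are usually produced with a tool such as mergekit's passthrough method rather than assembled by hand.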
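
And here is a minimal sketch of the suggested 32K sanity test with Hugging Face transformers, loading with linear RoPE scaling at factor 8. The repo id is a placeholder, since this commit does not name the merged checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual merged-model repository.
model_id = "grimulkan/<merged-model>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Linear RoPE scaling, factor 8: 4096 native * 8 = 32768-token context.
    rope_scaling={"type": "linear", "factor": 8.0},
)

prompt = "..."  # e.g. a long document followed by a question about it
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In loaders like text-generation-webui, the equivalent setting is typically `compress_pos_emb = 8`.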