---
license: llama2
---

This is an interleaved merge of Xwin-longLORA-70b-rope8-32k-fp16 and Euryale-1.3-longLORA-70b-rope8-32k-fp16, using the same merge formula as alpindale's goliath-120b.
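For illustration, here is a minimal sketch of what such an interleaved slice plan could look like, assuming a simple alternating-window scheme over two 80-layer donors. The `window` and `step` values below are illustrative assumptions, not the exact published goliath-120b recipe:

```python
# Sketch: build an alternating layer-slice plan in the goliath-120b style.
# Window/step values are illustrative assumptions, not the exact recipe.
DONORS = [
    "Xwin-longLORA-70b-rope8-32k-fp16",
    "Euryale-1.3-longLORA-70b-rope8-32k-fp16",
]

def interleaved_slices(n_layers=80, window=16, step=8):
    """Alternate overlapping `window`-layer slices between the two donors,
    advancing `step` layers per slice."""
    slices = []
    for i, start in enumerate(range(0, n_layers - window + 1, step)):
        slices.append({
            "model": DONORS[i % 2],
            "layer_range": [start, start + window],
        })
    return slices

for s in interleaved_slices():
    print(s["model"], s["layer_range"])
```

Because the windows overlap, some layers are duplicated, so the merged stack is deeper than either donor; this is how two 70B models yield a roughly 120B merge.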

There is no additional fine-tuning. The resulting model does not appear to be broken; you can test for yourself whether it truly preserves the original model's behavior while adding 32K context capability (use linear rope scaling with a factor of 8).
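As a minimal sketch, this is how the model could be loaded with linear rope scaling 8 via Hugging Face `transformers`; the repo id below is a placeholder, substitute the actual one:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-merge"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # linear rope scaling with factor 8, for 32K context
    rope_scaling={"type": "linear", "factor": 8.0},
)
```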

A 6-bit EXL2 quantization is available here.

See this discussion for how the original 70B merges were created with longLORA.