
This is an interleaved merge of Xwin-longLORA-70b-rope8-32k-fp16 and Euryale-1.3-longLORA-70b-rope8-32k-fp16, using the same merge formula as alpindale's goliath-120b.
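Interleaved merges of this kind are usually expressed as a mergekit passthrough config that alternates layer slices from the two source models. The sketch below is illustrative only: the layer ranges shown are hypothetical placeholders, not the actual goliath-120b formula.

```yaml
# Hypothetical mergekit passthrough config for an interleaved merge.
# Layer ranges are placeholders for illustration, not the real recipe.
slices:
  - sources:
      - model: Xwin-longLORA-70b-rope8-32k-fp16
        layer_range: [0, 16]
  - sources:
      - model: Euryale-1.3-longLORA-70b-rope8-32k-fp16
        layer_range: [8, 24]
  # ...further alternating slices up to the final layer...
merge_method: passthrough
dtype: bfloat16
```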

There is no additional fine-tuning. The resulting model does not appear to be broken; you can test whether it truly behaves as the original models with 32K context capability (use linear RoPE scaling with a factor of 8).
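Linear RoPE scaling works by dividing position indices by the scaling factor before computing the rotary angles, so a position at 32768 produces the same angles as position 4096 in the unscaled model. A minimal sketch (the function name and the reduced head dimension are illustrative, not from this model's code):

```python
def rope_angles(position, dim=8, base=10000.0, scaling_factor=1.0):
    """Rotary embedding angles for one position (toy head dim for clarity).

    Linear RoPE scaling divides the position by the factor, stretching
    the trained context window (e.g. 4096 * 8 = 32768).
    """
    pos = position / scaling_factor
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With factor 8, position 32768 maps onto the angles of unscaled position 4096.
assert rope_angles(32768, scaling_factor=8.0) == rope_angles(4096)
```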

ChuckMcSneed ran a benchmark here, indicating roughly 30% degradation at 8x the context length.

A 6-bit EXL2 quantization is available here. More EXL2 quants here, thanks to aikitoria.

See this discussion for how the original 70B merges were created with longLORA.

Safetensors · Model size: 118B params · Tensor type: BF16