---
license: llama2
tags:
  - mergekit
  - merge
---

This is a 32k-context version of Sao10K/WinterGoddess-1.4x-70B-L2, extended using the method discussed here.
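
As a minimal sketch (not part of the original card), loading the extended model with Hugging Face `transformers` could look like the following; the dtype and device settings are assumptions, and a 70B model still needs multiple GPUs or offloading:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ChuckMcSneed/WinterGoddess-1.4x-70b-32k"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # assumption: fp16 weights; plan for several GPUs
    device_map="auto",          # spread layers across available devices
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```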

# Quants

Thanks for the GGUF quants, @Nexesenex!
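
For running the GGUF quants, a hedged sketch with `llama-cpp-python` is shown below; the filename is a placeholder (the actual quant files are in @Nexesenex's repo), and `n_ctx` is set to the extended 32k window:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="WinterGoddess-1.4x-70b-32k.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,      # use the extended 32k context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```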

# Benchmarks

### NeoEvalPlusN_benchmark

My meme benchmark.

| Test name | WinterGoddess | WinterGoddess-32k |
|-----------|---------------|-------------------|
| B         | 2             | 2.5               |
| C         | 1.5           | 2                 |
| D         | 3             | 0                 |
| S         | 2.75          | 1.5               |
| P         | 5.5           | 2.25              |
| Total     | 14.75         | 8.25              |

### Open LLM leaderboard

Leaderboard on Huggingface

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------|---------|-----|-----------|------|------------|------------|-------|
| Sao10K/WinterGoddess-1.4x-70B-L2 | 73.23 | 72.78 | 90.11 | 71.12 | 65.76 | 85 | 54.59 |
| ChuckMcSneed/WinterGoddess-1.4x-70b-32k | 69.4 | 71.16 | 89.12 | 66.42 | 63.87 | 82.56 | 43.29 |
| Difference | 3.83 | 1.62 | 0.99 | 4.7 | 1.89 | 2.44 | 11.3 |

Here the losses seem far less brutal than on my benchmark. It seems that extending with LongLoRA kills MMLU and GSM8K performance.