---
license: llama2
tags:
- mergekit
- merge
---
|
|
|
This is a 32k-context version of Sao10K/WinterGoddess-1.4x-70B-L2, extended using the method discussed [here](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2).
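
Below is a minimal loading sketch using Hugging Face Transformers. It assumes the repo id `ChuckMcSneed/WinterGoddess-1.4x-70b-32k` (as listed in the leaderboard table below) and that the uploaded config already carries the extended RoPE settings; the commented-out `rope_scaling` override is only a fallback, not the exact recipe used for this merge.

```python
# Sketch: load the 32k model with Transformers.
# Assumes the repo id below and that the checkpoint config already encodes
# the extended context; the rope_scaling override (linear factor 8,
# 4096 * 8 = 32768 tokens) is left commented out as a fallback only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuckMcSneed/WinterGoddess-1.4x-70b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    # rope_scaling={"type": "linear", "factor": 8.0},  # only if the config lacks it
)

prompt = "Summarize the following story in one paragraph:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```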
|
|
|
# Quants |
|
Thanks for the GGUF quants, [@Nexesenex](https://huggingface.co/Nexesenex)! A usage sketch follows the link below.
|
- [GGUF](https://huggingface.co/Nexesenex/ChuckMcSneed_WinterGoddess-1.4x-70b-32k-iMat.GGUF) |
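
If you use the GGUF build, a minimal sketch with `llama-cpp-python` is shown here; the quant filename is a placeholder, so substitute whichever file you actually download from the repo above.

```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python.
# The filename is a placeholder for whichever quant you fetched.
from llama_cpp import Llama

llm = Llama(
    model_path="WinterGoddess-1.4x-70b-32k.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,      # use the extended 32k context
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```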
|
|
|
|
|
# Benchmarks |
|
### NeoEvalPlusN_benchmark |
|
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) |
|
|
|
| Test name | WinterGoddess | WinterGoddess-32k |
| --------- | ------------- | ----------------- |
| B         | 2             | 2.5               |
| C         | 1.5           | 2                 |
| D         | 3             | 0                 |
| S         | 2.75          | 1.5               |
| P         | 5.5           | 2.25              |
| Total     | 14.75         | 8.25              |
|
|
|
### Open LLM leaderboard |
|
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
|Model                                   |Average|ARC  |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|----------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|Sao10K/WinterGoddess-1.4x-70B-L2        |73.23  |72.78|90.11    |71.12|65.76     |85        |54.59|
|ChuckMcSneed/WinterGoddess-1.4x-70b-32k |69.4   |71.16|89.12    |66.42|63.87     |82.56     |43.29|
|Difference (base minus 32k)             |3.83   |1.62 |0.99     |4.7  |1.89      |2.44      |11.3 |
|
|
|
Here the losses seem far less brutal than on my benchmark. It seems that extending the context with LongLoRA kills MMLU and GSM8K performance in particular.