---
license: llama2
tags:
- mergekit
- merge
---

This is a 32k-context version of Sao10K/WinterGoddess-1.4x-70B-L2, extended using the method discussed [here](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2).
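
Loading should work like any other Llama-2-based checkpoint, assuming the extended RoPE settings are already baked into this repo's config. A minimal, untested sketch with 🤗 Transformers (the prompt and generation settings below are just placeholders):

```python
# Minimal loading/generation sketch; adjust dtype, device_map and quantization to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuckMcSneed/WinterGoddess-1.4x-70b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 70B model in fp16 needs ~140 GB; quantize if that's too much
    device_map="auto",
)

prompt = "Summarize the following text:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```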

# Quants
Thanks to [@Nexesenex](https://huggingface.co/Nexesenex) for the GGUF quants!
- [GGUF](https://huggingface.co/Nexesenex/ChuckMcSneed_WinterGoddess-1.4x-70b-32k-iMat.GGUF)
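
For the GGUF quants, something like the following should work with the `llama-cpp-python` bindings (the file name is a placeholder; pass `n_ctx=32768` if you actually want the extended context):

```python
# Hypothetical GGUF usage via llama-cpp-python; replace model_path with the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="WinterGoddess-1.4x-70b-32k.Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,  # request the full extended context window
)
out = llm("Your prompt here", max_tokens=256)
print(out["choices"][0]["text"])
```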


# Benchmarks
### NeoEvalPlusN_benchmark
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark)

| Test name  | WinterGoddess | WinterGoddess-32k |
| ---------- | ---------- | -------  |
| B | 2 | 2.5 |
| C | 1.5 | 2 |
| D | 3 | 0 |
| S | 2.75 | 1.5 |
| P | 5.5 | 2.25 |
| Total | 14.75 | 8.25 |

### Open LLM leaderboard
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model                                  |Average|ARC  |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|---------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|Sao10K/WinterGoddess-1.4x-70B-L2       |73.23  |72.78|90.11    |71.12|65.76     |85        |54.59|
|ChuckMcSneed/WinterGoddess-1.4x-70b-32k|69.4   |71.16|89.12    |66.42|63.87     |82.56     |43.29|
|Difference                             |3.83   |1.62 |0.99     |4.7  |1.89      |2.44      |11.3 |

Here the losses seem far less brutal than on my benchmark. It seems that extending the context with LongLoRA kills MMLU and GSM8K performance.