|
---
license: llama2
tags:
- merge
- mergekit
---
|
|
|
This is a merge of [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [WinterGoddess](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2), both extended to 32k using the method discussed [here](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2). |
|
|
|
Smarter than [Goliath-32k](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16), but has some quirks. |
|
|
|
A bit damaged compared to the original [WinterGoliath](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b).
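
If the shipped config does not already apply the extension, the 32k context can be reproduced at load time with linear RoPE scaling. The "rope8" naming in the linked models suggests a factor of 8 (4096 × 8 = 32768), but treat that as an assumption and check the model's `config.json`. A minimal loading sketch with `transformers`; the repo id below is hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for illustration; substitute the actual one.
model_id = "ChuckMcSneed/WinterGoliath-123b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
    # Assumption: linear RoPE scaling with factor 8 (4096 * 8 = 32768),
    # matching the "rope8" naming in the linked 32k extensions.
    # Omit this if the shipped config.json already sets rope_scaling.
    rope_scaling={"type": "linear", "factor": 8.0},
)
```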
|
|
|
# Prompt format |
|
Vicuna or Alpaca. |
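
For reference, a sketch of typical templates for these two formats is below; the exact system prompt wording and spacing are assumptions and may differ from what the merge parents were trained on.

```python
# Typical Vicuna-style prompt (system prompt wording is an assumption)
vicuna_prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {instruction} ASSISTANT:"
)

# Typical Alpaca-style prompt
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

print(vicuna_prompt.format(instruction="Summarize the plot of Hamlet."))
```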
|
|
|
# Benchmarks |
|
### NeoEvalPlusN_benchmark |
|
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) |
|
|
|
| Test name | WinterGoliath-32k | WinterGoliath |
| --------- | ----------------- | ------------- |
| B | 3 | 3 |
| C | 2 | 2 |
| D | 1 | 2 |
| S | 3.75 | 5.5 |
| P | 3.75 | 6 |
| Total | 13.5 | 18.5 |