---
language:
- en
license: llama3
tags:
- moe
model-index:
- name: L3-SnowStorm-v1.15-4x8B-A
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.2
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.89
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.11
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
      name: Open LLM Leaderboard
---
**ExLlamaV2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.1.1

Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-2_2bpw_exl2)**</center> | <center>7777 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-2_5bpw_exl2)**</center> | <center>8520 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-3_0bpw_exl2)**</center> | <center>9941 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-3_5bpw_exl2)**</center> | <center>11366 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-3_75bpw_exl2)**</center> | <center>12066 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-4_0bpw_exl2)**</center> | <center>12789 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-4_25bpw_exl2)**</center> | <center>13504 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-5_0bpw_exl2)**</center> | <center>15640 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-6_0bpw_exl2)**</center> | <center>18586 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-6_5bpw_exl2)**</center> | <center>20007 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-8_0bpw_exl2)**</center> | <center>24101 MB</center> | <center>8</center> |
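
A minimal sketch of downloading and loading one of these quants with the ExLlamaV2 Python API, assuming `exllamav2` and `huggingface_hub` are installed; the repo id is taken from the 4.25 bpw row above, and the sampling settings are illustrative, not a recommendation from this card:

```python
# Illustrative only: download the 4.25 bpw quant and run a short generation with ExLlamaV2.
# Needs a CUDA GPU with enough VRAM for the chosen quant (see the size column above).
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch the quantized weights to a local directory.
model_dir = snapshot_download("Zoyd/xxx777xxxASD_L3-SnowStorm-v1.15-4x8B-A-4_25bpw_exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache so load_autosplit can place layers across GPUs
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                 # example sampling values only
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, num_tokens=64))
```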
<style>
.image-container {
position: relative;
display: inline-block;
}
.image-container img {
display: block;
border-radius: 10px;
box-shadow: 0 0 1px rgba(0, 0, 0, 0.3);
}
.image-container::before {
content: "";
position: absolute;
top: 0px;
left: 20px;
width: calc(100% - 40px);
height: calc(100%);
background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/8eG7GxTvcbxyVFQf5GF3C.png");
background-size: cover;
filter: blur(10px);
z-index: -1;
}
</style>
<br>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/8eG7GxTvcbxyVFQf5GF3C.png" style="width: 96%; margin: auto;" >
</div>
> [!NOTE]
> [GGUF](https://huggingface.co/collections/xxx777xxxASD/snowstorm-v115-4x8b-a-665587d3fda461267cfa9d69)
An experimental RP-oriented MoE. The idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

There are two versions:
- [v1.15A](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A) <- You're here
- [v1.15B](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B)
### Llama 3 SnowStorm v1.15A 4x8B
```
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: Nitral-AI_Poppy_Porpoise-1.0-L3-8B
- source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
- source_model: openlynn_Llama-3-Soliloquy-8B-v2
- source_model: Sao10K_L3-8B-Stheno-v3.1
```
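
The block above is a mergekit-style MoE config: four Llama 3 8B finetunes become the experts, the gate weights start untrained (`gate_mode: random`), and each token is routed to two experts at inference (`experts_per_token: 2`). Below is a minimal, illustrative PyTorch sketch of what that top-2 routing means; the layer sizes and `Linear` experts are simplified stand-ins, not mergekit's or this model's actual code:

```python
# Illustrative top-2 routing over 4 experts; dimensions and expert layers are stand-ins.
import torch
import torch.nn.functional as F

hidden_size, num_experts, top_k = 4096, 4, 2   # 4 experts, experts_per_token = 2

# gate_mode: random -> the router starts with untrained (random) weights.
router = torch.nn.Linear(hidden_size, num_experts, bias=False)
# Stand-ins for the four expert MLPs (in the merged model these come from the donor models' FFN blocks).
experts = torch.nn.ModuleList(torch.nn.Linear(hidden_size, hidden_size) for _ in range(num_experts))

x = torch.randn(1, hidden_size)                        # one token's hidden state
probs = F.softmax(router(x), dim=-1)                   # router scores over the 4 experts
weights, idx = torch.topk(probs, top_k, dim=-1)        # keep the top 2
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen experts

# Combine the two selected experts' outputs, weighted by the router.
y = sum(w * experts[int(i)](x) for w, i in zip(weights[0], idx[0]))
print(y.shape)  # torch.Size([1, 4096])
```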
## Models used
- [Nitral-AI/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B)
- [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
## Difference (from SnowStorm v1.0)
- Update from [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) to [Nitral-AI/Poppy_Porpoise-0.85-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.85-L3-8B)
## Vision
[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)
## Prompt format: Llama 3
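
For reference, a minimal sketch of building a Llama 3 formatted prompt through the tokenizer's chat template; it assumes the repo's tokenizer ships the standard Llama 3 instruct template, and the messages are placeholders:

```python
# Illustrative only: render a Llama 3 style prompt via the bundled chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A")

messages = [
    {"role": "system", "content": "You are {{char}}, roleplaying with {{user}}."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt=True appends the assistant header so the model continues in character.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape (Llama 3): <|begin_of_text|><|start_header_id|>system<|end_header_id|> ... <|eot_id|>
# <|start_header_id|>user<|end_header_id|> ... <|eot_id|><|start_header_id|>assistant<|end_header_id|>
```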
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_xxx777xxxASD__L3-SnowStorm-v1.15-4x8B-A)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.68|
|AI2 Reasoning Challenge (25-Shot)|62.20|
|HellaSwag (10-Shot) |81.09|
|MMLU (5-Shot) |67.89|
|TruthfulQA (0-shot) |52.11|
|Winogrande (5-shot) |76.32|
|GSM8k (5-shot) |66.49|