---
license: llama3
tags:
- moe
language:
- en
---

**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| --------- | -------------- | ----------- |
| **[2.2](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-2_2bpw_exl2)** | 7777 MB | 6 |
| **[2.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-2_5bpw_exl2)** | 8519 MB | 6 |
| **[3.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-3_0bpw_exl2)** | 9944 MB | 6 |
| **[3.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-3_5bpw_exl2)** | 11365 MB | 6 |
| **[3.75](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-3_75bpw_exl2)** | 12080 MB | 6 |
| **[4.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-4_0bpw_exl2)** | 12789 MB | 6 |
| **[4.25](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-4_25bpw_exl2)** | 13503 MB | 6 |
| **[5.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-5_0bpw_exl2)** | 15632 MB | 6 |
| **[6.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-6_0bpw_exl2)** | 18594 MB | 8 |
| **[6.5](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-6_5bpw_exl2)** | 19969 MB | 8 |
| **[8.0](https://huggingface.co/Zoyd/xxx777xxxASD_L3_SnowStorm_4x8B-8_0bpw_exl2)** | 24115 MB | 8 |
(Maybe I'll change the waifu picture later)

> [!NOTE]
> [GGUF/Exl2 quants](https://huggingface.co/collections/xxx777xxxASD/snowstorm-4x8b-664b52a1d2a12e515efb5680)

An experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

### Llama 3 SnowStorm 4x8B

```
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B
  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
  - source_model: openlynn_Llama-3-Soliloquy-8B-v2
  - source_model: Sao10K_L3-8B-Stheno-v3.1
```

## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
- [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)

## Difference (from ChaoticSoliloquy v1.5)

- Update from [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) to [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- Update from [openlynn/Llama-3-Soliloquy-8B-v1](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v1) to [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- Update from [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) to [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)

## Vision

[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)

## Prompt format: Llama 3
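For reference, a minimal sketch of assembling a single-turn prompt in the Llama 3 instruct format by hand (the special tokens are the standard Llama 3 instruct tokens; the helper name and message contents are illustrative — most backends apply this template for you via the model's chat template):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using Llama 3 instruct special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful roleplay assistant.", "Hello!")
print(prompt)
```

The trailing assistant header with no `<|eot_id|>` is what cues the model to generate its reply; generation should stop on `<|eot_id|>`.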