---
license: llama3
tags:
- moe
language:
- en
---
Experimental RP-oriented MoE. The idea was to get a model that is equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
## Llama 3 SnowStorm v1.15B 4x8B

```yaml
base_model: Sao10K_L3-8B-Stheno-v3.1
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: Nitral-AI_Poppy_Porpoise-0.85-L3-8B
- source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
- source_model: openlynn_Llama-3-Soliloquy-8B-v2
- source_model: Sao10K_L3-8B-Stheno-v3.1
```
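For reference, the recipe above is a mergekit MoE config. The sketch below shows one way such a config is typically built and handed to mergekit; it assumes `mergekit` (with its `mergekit-moe` command) and PyYAML are installed, uses the Hugging Face repo ids in place of the local directory names shown in the config, and the output path is a placeholder.

```python
# Minimal sketch: write the MoE recipe to disk and run mergekit's MoE merge.
# Assumes `pip install mergekit pyyaml`; model ids and paths are illustrative.
import subprocess
import yaml

config = {
    "base_model": "Sao10K/L3-8B-Stheno-v3.1",
    "gate_mode": "random",        # random gating, no hidden-state prompts needed
    "dtype": "bfloat16",
    "experts_per_token": 2,       # two experts active per token
    "experts": [
        {"source_model": "Nitral-AI/Poppy_Porpoise-0.85-L3-8B"},
        {"source_model": "NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS"},
        {"source_model": "openlynn/Llama-3-Soliloquy-8B-v2"},
        {"source_model": "Sao10K/L3-8B-Stheno-v3.1"},
    ],
}

with open("snowstorm-moe.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Build the 4x8B model into a local output directory (placeholder path).
subprocess.run(["mergekit-moe", "snowstorm-moe.yaml", "./L3-SnowStorm-4x8B"], check=True)
```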
## Models used
- Nitral-AI/Poppy_Porpoise-0.85-L3-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- openlynn/Llama-3-Soliloquy-8B-v2
- Sao10K/L3-8B-Stheno-v3.1
## Difference (from SnowStorm v1.0)
- Update from ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B to Nitral-AI/Poppy_Porpoise-0.85-L3-8B
- Change base model from NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS to Sao10K/L3-8B-Stheno-v3.1
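To try the merged checkpoint locally, a minimal transformers loading sketch is shown below; the repo id is a placeholder for wherever the final model is published, and it assumes a recent transformers plus accelerate install and hardware with bfloat16 support.

```python
# Minimal inference sketch; the repo id is a placeholder, not the actual upload.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/L3-SnowStorm-v1.15-4x8B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the dtype used in the merge
    device_map="auto",            # requires accelerate
)

prompt = "You are a creative roleplay partner.\n\nUser: Describe the tavern we just entered.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```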