
Exllamav2 quant (exl2 / 2.5 bpw) made with ExLlamaV2 v0.1.1

Other EXL2 quants:

| Quant (bpw) | Model Size | lm_head (bits) |
|-------------|------------|----------------|
| 2.2         | 7777 MB    | 6              |
| 2.5         | 8520 MB    | 6              |
| 3.0         | 9941 MB    | 6              |
| 3.5         | 11366 MB   | 6              |
| 3.75        | 12066 MB   | 6              |
| 4.0         | 12789 MB   | 6              |
| 4.25        | 13504 MB   | 6              |
| 5.0         | 15640 MB   | 6              |
| 6.0         | 18586 MB   | 8              |
| 6.5         | 20007 MB   | 8              |
| 8.0         | 24101 MB   | 8              |
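The table above scales roughly linearly with bits per weight. A 4x8B mergekit-style MoE shares attention and embedding weights across experts and duplicates only the MLPs, which puts the total parameter count around 25B (an estimate, not a figure from this card). A rough sketch of the relationship:

```python
def est_size_mb(n_params: float, bpw: float) -> int:
    """Rough quantized file-size estimate: n_params weights at bpw bits
    each, converted to MiB. Ignores the separately quantized lm_head and
    container overhead, so real files run somewhat larger."""
    return int(n_params * bpw / 8 / 2**20)

# ~24.9e9 params assumed for a 4x8B Llama 3 MoE (shared attention/embeddings)
lower_bound = est_size_mb(24.9e9, 2.5)  # compare against the 8520 MB entry
```

This is only a sanity check on the table, not how the files were produced.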

GGUF

Experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

There's:

- Llama 3 SnowStorm v1.15A 4x8B

```yaml
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: Nitral-AI_Poppy_Porpoise-1.0-L3-8B
  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
  - source_model: openlynn_Llama-3-Soliloquy-8B-v2
  - source_model: Sao10K_L3-8B-Stheno-v3.1
```
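With `experts_per_token: 2`, the router selects two of the four experts for each token and blends their outputs by renormalized softmax weight. A minimal pure-Python sketch of standard top-k routing (illustrative only; the function name is not from mergekit):

```python
import math

def top_k_route(gate_logits, k=2):
    """Pick the top-k experts by gate logit and renormalize their
    softmax probabilities so the selected weights sum to 1."""
    # numerically stable softmax over all experts
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # indices of the k largest probabilities
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in topk)
    return [(i, probs[i] / norm) for i in topk]

# e.g. four gate logits, one per expert in the config above
routing = top_k_route([0.1, 2.0, -1.0, 0.5], k=2)
```

Note that with `gate_mode: random` the gate weights are randomly initialized rather than learned, so which experts fire for a given token is effectively arbitrary; the mechanism above is the same either way.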

Models used

Difference (from SnowStorm v1.0)

Vision

llama3_mmproj


Prompt format: Llama 3
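A single-turn prompt in the Llama 3 chat template, using the standard Meta special tokens, can be assembled like this (a sketch; the helper name is mine):

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Most frontends (and `tokenizer.apply_chat_template` in transformers) produce this format for you when the "Llama 3" preset is selected.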

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                              | Value |
|-------------------------------------|-------|
| Avg.                                | 67.68 |
| AI2 Reasoning Challenge (25-shot)   | 62.20 |
| HellaSwag (10-shot)                 | 81.09 |
| MMLU (5-shot)                       | 67.89 |
| TruthfulQA (0-shot)                 | 52.11 |
| Winogrande (5-shot)                 | 76.32 |
| GSM8K (5-shot)                      | 66.49 |
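The reported average is the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Open LLM Leaderboard scores from the table above
scores = {
    "ARC (25-shot)": 62.20,
    "HellaSwag (10-shot)": 81.09,
    "MMLU (5-shot)": 67.89,
    "TruthfulQA (0-shot)": 52.11,
    "Winogrande (5-shot)": 76.32,
    "GSM8K (5-shot)": 66.49,
}
avg = round(sum(scores.values()) / len(scores), 2)  # 67.68
```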
