This is an experimental Mixture of Experts (MoE) model that combines two copies of Nous Hermes 2 Yi 34B into a single 2x34B model.

The base model is Yi-34B.

All credit goes to NousResearch for the fine-tuned Yi model, to 01-AI for the base Yi model, and to Charles O. Goddard for mergekit.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 73.30 |
| AI2 Reasoning Challenge (25-shot) | 66.64 |
| HellaSwag (10-shot)               | 85.73 |
| MMLU (5-shot)                     | 76.49 |
| TruthfulQA (0-shot)               | 58.08 |
| Winogrande (5-shot)               | 83.35 |
| GSM8K (5-shot)                    | 69.52 |
Model size: 60.8B parameters (BF16, Safetensors).