
FusionNet_7Bx2_MoE_v0.1

A model fine-tuned on English-language data using the Mixture of Experts (MoE) method; an improved version of FusionNet_7Bx2_MoE_14B.

Model description

FusionNet_7Bx2_MoE_v0.1 is an experiment with the MoE method, which can significantly improve the performance of the original model. FusionNet has 12.9B parameters, stored in BF16, and has been fine-tuned. Enjoy!
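
As a quick-start sketch, the model can be loaded like any causal LM on the Hub with the transformers library. The Hub id below is an assumption for illustration; substitute this repository's actual id:

```python
# Minimal loading/generation sketch with Hugging Face transformers.
# The repo id is an assumption; replace it with this card's actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomGrc/FusionNet_7Bx2_MoE_v0.1"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

inputs = tokenizer("The Mixture of Experts method works by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```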

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 76.16 |
| AI2 Reasoning Challenge (25-shot) | 74.06 |
| HellaSwag (10-shot) | 88.90 |
| MMLU (5-shot) | 65.00 |
| TruthfulQA (0-shot) | 71.20 |
| Winogrande (5-shot) | 87.53 |
| GSM8k (5-shot) | 70.28 |
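
The Open LLM Leaderboard produces these scores with EleutherAI's lm-evaluation-harness. Below is a minimal sketch for reproducing a single benchmark locally, assuming lm-eval 0.4+ (`pip install lm-eval`) and the same illustrative Hub id as above; the leaderboard's exact harness version and settings may differ, so local numbers can vary slightly:

```python
# Sketch: re-running ARC-Challenge (25-shot) with lm-evaluation-harness.
# The Hub id is an assumption; replace it with this card's actual id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TomGrc/FusionNet_7Bx2_MoE_v0.1,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```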
