# UNAversal-2x7B-v1

This is Phase 1 UNA only, applied to the MLP layers, and should be considered a beta. The goal was to produce a small but powerful MoE.

This is a 2-expert MoE model with 7B parameters per expert, based on the Intel neural series v3.
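A quick sanity check on the headline numbers: a 2-expert MoE built from 7B experts totals about 12.9B parameters rather than 14B, because only the MLPs are duplicated while attention and embeddings are shared. The sketch below assumes Mistral-7B-like dimensions (hidden size 4096, 32 layers, MLP intermediate 14336, grouped-query attention with 8 KV heads, vocab 32000); these dimensions are an assumption for illustration, not stated in this card.

```python
# Assumed Mistral-7B-like architecture dimensions (not confirmed by this card).
HIDDEN, LAYERS, INTERMEDIATE = 4096, 32, 14336
KV_HEADS, HEAD_DIM = 8, 128
VOCAB = 32000

# Grouped-query attention: q/o projections are hidden x hidden,
# k/v projections are hidden x (kv_heads * head_dim).
attn = LAYERS * (2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_HEADS * HEAD_DIM)

# SwiGLU MLP: gate, up, and down projections per layer.
mlp = LAYERS * 3 * HIDDEN * INTERMEDIATE

# Untied input and output embedding matrices.
embed = 2 * VOCAB * HIDDEN

dense_7b = attn + mlp + embed     # one full 7B expert
moe_2x = dense_7b + mlp           # second expert duplicates only the MLPs

print(f"dense ~{dense_7b / 1e9:.2f}B, 2-expert MoE ~{moe_2x / 1e9:.2f}B")
```

The MoE total lands at roughly 12.88B, matching the 12.9B reported below.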

| Task           | Version | Filter | n-shot | Metric     | Value  | Stderr  |
|----------------|---------|--------|-------:|------------|-------:|---------|
| arc_challenge  | Yaml    | none   |     25 | acc        | 0.7133 | ±0.0132 |
| arc_challenge  | Yaml    | none   |     25 | acc_norm   | 0.7235 | ±0.0131 |
| arc_easy       | Yaml    | none   |      0 | acc        | 0.8674 | ±0.0070 |
| arc_easy       | Yaml    | none   |      0 | acc_norm   | 0.8291 | ±0.0077 |
| boolq          | Yaml    | none   |      0 | acc        | 0.8768 | ±0.0057 |
| lambada_openai | Yaml    | none   |      0 | perplexity | 3.6656 | ±0.0841 |
| lambada_openai | Yaml    | none   |      0 | acc        | 0.7017 | ±0.0064 |
| mathqa         | Yaml    | none   |      0 | acc        | 0.3474 | ±0.0087 |
| mathqa         | Yaml    | none   |      0 | acc_norm   | 0.3585 | ±0.0088 |
| piqa           | Yaml    | none   |      0 | acc        | 0.8411 | ±0.0085 |
| piqa           | Yaml    | none   |      0 | acc_norm   | 0.8526 | ±0.0083 |
| sciq           | Yaml    | none   |      0 | acc        | 0.9600 | ±0.0062 |
| sciq           | Yaml    | none   |      0 | acc_norm   | 0.9370 | ±0.0077 |
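The table format above matches the output of EleutherAI's lm-evaluation-harness. A plausible way to reproduce it is sketched below; the exact harness version, flags, and repo id the author used are not stated in this card (the `fblgit/UNAversal-2x7B-v1` repo id is assumed), so treat this as a starting point rather than the confirmed command.

```shell
# Install the evaluation harness, then run the 25-shot arc_challenge task.
# Repo id and flags are assumptions; adjust to your setup.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=fblgit/UNAversal-2x7B-v1,dtype=bfloat16 \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size auto
```

The remaining tasks in the table were run 0-shot, so drop `--num_fewshot 25` and pass e.g. `--tasks arc_easy,boolq,lambada_openai,mathqa,piqa,sciq` for those.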
Model size: 12.9B parameters (safetensors, BF16).