
una-neural-chat-v3-3-phase2

OMA (OneManArmy) proudly presents una-neural-chat-v3-3 PHASE 2. Powered by UNA (Uniform Neural Alignment), trained with the zephyr trainer on cleaned allenai/ultrafeedback, and just that. It outperforms its base model without adding any data: only the UNA algorithm applied on the Transformers library. UNA settings:

  • MLP : 0.05
  • ATT : 0.03
  • LNOR : 0.02

Framework versions

  • Transformers 4.35.0-UNA
  • Pytorch 2.1.0
  • Datasets 2.14.6
  • Tokenizers 0.14.1
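The card does not include a usage snippet. Below is a minimal sketch with the Transformers text-generation pipeline. The `### System:` / `### User:` / `### Assistant:` prompt format is an assumption borrowed from the neural-chat family of base models, not something this card states; verify it against the base model's card before relying on it.

```python
# Hypothetical usage sketch. The prompt format below is an ASSUMPTION
# (neural-chat style); it is not documented on this card.

def build_prompt(system: str, user: str) -> str:
    """Assemble a neural-chat style prompt (assumed format)."""
    return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

prompt = build_prompt(
    "You are a helpful assistant.",
    "Summarize what UNA (Uniform Neural Alignment) changes in a model.",
)
print(prompt)

# Generation (requires downloading the 7.24B-parameter checkpoint):
# from transformers import pipeline
# pipe = pipeline(
#     "text-generation",
#     model="one-man-army/una-neural-chat-v3-3-P2-OMA",
#     torch_dtype="auto",
#     device_map="auto",
# )
# print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```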

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 70.72 |
| AI2 Reasoning Challenge (25-shot) | 67.32 |
| HellaSwag (10-shot)               | 86.33 |
| MMLU (5-shot)                     | 63.14 |
| TruthfulQA (0-shot)               | 65.49 |
| Winogrande (5-shot)               | 79.79 |
| GSM8k (5-shot)                    | 62.24 |
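The Avg. figure is the arithmetic mean of the six benchmark scores; a quick sanity check in Python:

```python
# Open LLM Leaderboard scores reported above
scores = {
    "ARC (25-shot)": 67.32,
    "HellaSwag (10-shot)": 86.33,
    "MMLU (5-shot)": 63.14,
    "TruthfulQA (0-shot)": 65.49,
    "Winogrande (5-shot)": 79.79,
    "GSM8k (5-shot)": 62.24,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # → 70.72, matching the reported average
```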
Downloads last month: 1,839

  • Format: Safetensors
  • Model size: 7.24B params
  • Tensor type: FP16

Dataset used to train one-man-army/una-neural-chat-v3-3-P2-OMA