# HelpingAI-110M Model Card
**Datasets used:**
- OEvortex/vortex-mini

**Language:**
- English (en)

**License:**
- HelpingAI Simplified Universal License (HSUL)
## Model Overview

HelpingAI-110M is a lightweight version of the HelpingAI model with 110M parameters.
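The card does not include a usage snippet; a minimal sketch with the Hugging Face `transformers` library, assuming the model is published under the `OEvortex/HelpingAI-110M` repo id named elsewhere on this card, might look like:

```python
# Hedged sketch: load HelpingAI-110M and generate a short completion.
# The repo id "OEvortex/HelpingAI-110M" is taken from this card; adjust
# if the model is hosted under a different name.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-110M")
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-110M")

# Tokenize a prompt and generate up to 30 new tokens.
inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```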
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Split | Value |
|---|---|---|
| Avg. | | 29.05 |
| AI2 Reasoning Challenge (25-shot, normalized accuracy) | test | 22.78 |
| HellaSwag (10-shot, normalized accuracy) | validation | 28.02 |
| MMLU (5-shot, accuracy) | test | 23.66 |
| TruthfulQA (0-shot, mc2) | validation | 48.25 |
| Winogrande (5-shot, accuracy) | validation | 51.62 |
| GSM8k (5-shot, accuracy) | test | 0.00 |