# firefly-bloom-7b1

This model is based on bloom-7b1 and was instruction-tuned on roughly one million Chinese and English instruction examples.

See the Firefly project for more details.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 34.99 |
| ARC (25-shot) | 40.44 |
| HellaSwag (10-shot) | 61.2 |
| MMLU (5-shot) | 26.83 |
| TruthfulQA (0-shot) | 40.83 |
| Winogrande (5-shot) | 64.56 |
| GSM8K (5-shot) | 0.68 |
| DROP (3-shot) | 10.37 |
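As a sanity check, the Avg. row appears to be the plain arithmetic mean of the seven per-benchmark scores (an assumption about how the leaderboard aggregates; it is not stated in this card):

```python
# Sketch: verify that "Avg." matches the unweighted mean of the
# seven benchmark scores reported in the table above.
scores = {
    "ARC (25-shot)": 40.44,
    "HellaSwag (10-shot)": 61.2,
    "MMLU (5-shot)": 26.83,
    "TruthfulQA (0-shot)": 40.83,
    "Winogrande (5-shot)": 64.56,
    "GSM8K (5-shot)": 0.68,
    "DROP (3-shot)": 10.37,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 34.99, matching the reported Avg.
```

The near-zero GSM8K score drags the average down noticeably, which is common for base-sized instruction-tuned models on multi-step math.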