Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -2,3 +2,17 @@
 language: en
 ---
 This is a Hugging Face transformers-compatible conversion of the original dense 6.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-6.7B).
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 36.09 |
+| ARC (25-shot) | 39.42 |
+| HellaSwag (10-shot) | 71.26 |
+| MMLU (5-shot) | 26.91 |
+| TruthfulQA (0-shot) | 32.73 |
+| Winogrande (5-shot) | 65.27 |
+| GSM8K (5-shot) | 0.0 |
+| DROP (3-shot) | 17.05 |
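
For context on what "transformers-compatible" means in practice, here is a minimal loading sketch. It assumes the converted checkpoint is published under the `KoboldAI/fairseq-dense-6.7B` repo id (inferred from the results dataset name above, not stated in this PR) and that it loads through the standard `AutoModelForCausalLM`/`AutoTokenizer` APIs; the prompt text is illustrative only.

```python
# Minimal sketch: load the converted checkpoint with Hugging Face transformers.
# The repo id is inferred from the results dataset name above; adjust if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/fairseq-dense-6.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation to confirm the conversion loads end to end.
inputs = tokenizer("Mixture-of-experts models scale language modeling by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```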
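The linked details dataset behind the table can also be queried programmatically. A sketch with the `datasets` library, assuming the leaderboard's auto-generated config naming (`harness_<task>_<nshot>`) and its `latest` split; both are conventions of the leaderboard details repos rather than anything specified in this PR.

```python
# Sketch: pull the per-sample Winogrande (5-shot) details behind the table above.
# Config name and split follow the leaderboard's auto-generated conventions
# (assumed here, not stated in this PR).
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_KoboldAI__fairseq-dense-6.7B",
    "harness_winogrande_5",
    split="latest",
)
print(details)
```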