Adding Evaluation Results

#2
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -68,3 +68,17 @@ model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-dolly")
 
 You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2-dolly)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 25.53 |
+| ARC (25-shot)       | 22.7  |
+| HellaSwag (10-shot) | 30.15 |
+| MMLU (5-shot)       | 25.81 |
+| TruthfulQA (0-shot) | 44.97 |
+| Winogrande (5-shot) | 51.46 |
+| GSM8K (5-shot)      | 0.15  |
+| DROP (3-shot)       | 3.45  |
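
The context lines above note that the raw model can be used for text generation. A minimal sketch of that usage, assuming the standard `transformers` text-generation pipeline (the prompt and sampling settings below are illustrative, not part of this PR):

```python
# Minimal text-generation sketch for lgaalves/gpt2-dolly.
# Assumes the transformers library is installed; prompt and
# generation parameters are illustrative examples.
from transformers import pipeline

generator = pipeline("text-generation", model="lgaalves/gpt2-dolly")

# Generate a short continuation; sampling settings are arbitrary.
output = generator(
    "Instruction: Explain what a language model is.\nResponse:",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```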