Adding Evaluation Results

#2
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -66,4 +66,17 @@ model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-xl_camel-ai-physics"

 # Intended uses, limitations & biases

- You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. Its training data contains a lot of unfiltered content from the internet, which is far from neutral.
+ You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. Its training data contains a lot of unfiltered content from the internet, which is far from neutral.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 29.9  |
+ | ARC (25-shot)       | 29.52 |
+ | HellaSwag (10-shot) | 50.62 |
+ | MMLU (5-shot)       | 26.79 |
+ | TruthfulQA (0-shot) | 39.12 |
+ | Winogrande (5-shot) | 57.54 |
+ | GSM8K (5-shot)      | 0.15  |
+ | DROP (3-shot)       | 5.57  |
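
For anyone sanity-checking the card's "raw model for text generation" line, here is a minimal sketch using the `transformers` pipeline with the model id from the hunk header above. The prompt and sampling parameters are illustrative assumptions, not values from the card:

```python
# Minimal sketch: text generation with the checkpoint named in the diff above.
# Assumes `transformers` (and a torch backend) are installed; the prompt and
# sampling parameters below are illustrative, not from the model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lgaalves/gpt2-xl_camel-ai-physics",
)

out = generator(
    "A ball is thrown vertically upward with an initial speed of 20 m/s.",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
)
print(out[0]["generated_text"])
```

Note this only exercises plain generation; the leaderboard numbers in the table come from the Open LLM Leaderboard's own evaluation runs linked above, not from a script like this.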