Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -47,3 +47,17 @@ If a question does not make any sense, or is not factually coherent, explain why
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64eb1e4a55e4f0ecb9c4f406/PsbTFlswJexLuwrJYtvly.png)
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vibhorag101__llama-2-7b-chat-hf-phr_mental_health-2048)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 42.84 |
+| ARC (25-shot)       | 52.39 |
+| HellaSwag (10-shot) | 75.39 |
+| MMLU (5-shot)       | 39.77 |
+| TruthfulQA (0-shot) | 42.89 |
+| Winogrande (5-shot) | 71.19 |
+| GSM8K (5-shot)      | 5.91  |
+| DROP (3-shot)       | 12.3  |
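
The Avg. row is the arithmetic mean of the seven per-task scores. A minimal sanity check, using the values from the diff above (the small gap to the table's 42.84 is presumably rounding applied upstream before averaging):

```python
# Per-task scores from the Open LLM Leaderboard table in this PR
scores = {
    "ARC (25-shot)": 52.39,
    "HellaSwag (10-shot)": 75.39,
    "MMLU (5-shot)": 39.77,
    "TruthfulQA (0-shot)": 42.89,
    "Winogrande (5-shot)": 71.19,
    "GSM8K (5-shot)": 5.91,
    "DROP (3-shot)": 12.3,
}

# Unweighted mean across tasks
avg = sum(scores.values()) / len(scores)
print(f"Average: {avg:.2f}")  # ~42.83, within rounding of the reported 42.84
```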