Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -352,3 +352,17 @@ Despite this, we have still worked hard toward opening the weights of the model
  Our researchers have no authority to publicly release them without authorization.
 
  Thank you for your understanding.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-70B-V1.0-GPTQ).
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 55.28 |
+ | ARC (25-shot)        | 63.82 |
+ | HellaSwag (10-shot)  | 83.85 |
+ | MMLU (5-shot)        | 63.68 |
+ | TruthfulQA (0-shot)  | 54.54 |
+ | Winogrande (5-shot)  | 78.61 |
+ | GSM8K (5-shot)       | 18.5  |
+ | DROP (3-shot)        | 23.97 |
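
Not part of the diff itself, but for context: the per-sample details behind the summary table live in the linked dataset and can be loaded with the `datasets` library. A minimal sketch, assuming the leaderboard's usual `harness_<task>_<n-shot>` config naming and a `latest` split (neither is confirmed by this PR; check the dataset card for the exact names):

```python
from datasets import load_dataset

# Hypothetical config and split names, following the Open LLM Leaderboard's
# usual conventions; verify against the dataset card before relying on them.
details = load_dataset(
    "open-llm-leaderboard/details_TheBloke__WizardLM-70B-V1.0-GPTQ",
    "harness_arc_challenge_25",  # assumed config: ARC, 25-shot
    split="latest",
)
print(details[0])  # one evaluated example with its prompt, prediction, and score
```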