leaderboard-pr-bot committed
Commit 0b607a4 · 1 Parent(s): 0eb5394

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -68,3 +68,17 @@ Despite its advanced capabilities, OpenChat is still bound by the limitations in
 
  **Hallucination of Non-existent Information**
  OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_v2_w)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 47.16 |
+ | ARC (25-shot) | 57.34 |
+ | HellaSwag (10-shot) | 81.23 |
+ | MMLU (5-shot) | 50.17 |
+ | TruthfulQA (0-shot) | 50.7 |
+ | Winogrande (5-shot) | 75.93 |
+ | GSM8K (5-shot) | 8.42 |
+ | DROP (3-shot) | 6.35 |
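
The README section added above links to a per-benchmark details dataset. Below is a minimal sketch, not part of this PR, of loading those details with the `datasets` library; the config name `harness_arc_challenge_25` and the `latest` split are assumptions based on the leaderboard's usual naming, so check the dataset page for the exact names before running it.

```python
# Minimal sketch for pulling the per-benchmark details behind the summary table.
# Assumptions (not stated in this PR): configs follow the leaderboard's usual
# "harness_<task>_<n_shot>" pattern and a "latest" split exists.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_openchat__openchat_v2_w",  # dataset linked in the README diff
    "harness_arc_challenge_25",  # assumed config: ARC, 25-shot
    split="latest",              # assumed split name
)
print(details[0])  # one per-example record for inspection
```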