Adding Evaluation Results #5
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -44,4 +44,17 @@ The primary intended users of the model are researchers in natural language proc
 18K conversations collected from ShareGPT.com.
 
 ## Evaluation dataset
-A preliminary evaluation of the model quality is conducted by our released [LongEval](https://github.com/DachengLi1/LongChat).
+A preliminary evaluation of the model quality is conducted by our released [LongEval](https://github.com/DachengLi1/LongChat).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__longchat-13b-16k)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 46.32 |
+| ARC (25-shot)         | 53.58 |
+| HellaSwag (10-shot)   | 77.67 |
+| MMLU (5-shot)         | 45.24 |
+| TruthfulQA (0-shot)   | 47.07 |
+| Winogrande (5-shot)   | 70.09 |
+| GSM8K (5-shot)        | 4.17  |
+| DROP (3-shot)         | 26.42 |
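For readers who want the per-sample records behind the summary table, the details dataset linked in the diff can be pulled with the `datasets` library. The sketch below is a minimal, non-authoritative example: the repo id comes from the PR, but the available config and split names inside a leaderboard details repo vary by run, so it discovers them at runtime rather than hard-coding any.

```python
# Minimal sketch: fetch the leaderboard details linked in this PR.
# Assumes the `datasets` library; config/split names are discovered
# at runtime because details repos expose many run-specific configs.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_lmsys__longchat-13b-16k"

# One config per benchmark/run recorded in the details repo.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config; inspect the returned DatasetDict to see
# which splits (runs) it contains before picking one.
ds = load_dataset(repo, configs[0])
print(ds)
```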
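As a sanity check on the table itself, the leaderboard's Avg. row is the plain arithmetic mean of the seven benchmark scores, and the figures in this PR reproduce it exactly:

```python
# Verify that Avg. equals the arithmetic mean of the seven
# benchmark scores reported in the table above.
scores = {
    "ARC (25-shot)": 53.58,
    "HellaSwag (10-shot)": 77.67,
    "MMLU (5-shot)": 45.24,
    "TruthfulQA (0-shot)": 47.07,
    "Winogrande (5-shot)": 70.09,
    "GSM8K (5-shot)": 4.17,
    "DROP (3-shot)": 26.42,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 46.32, matching the Avg. row
```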