Adding Evaluation Results
#1 opened by leaderboard-pr-bot
README.md
CHANGED
@@ -16,4 +16,17 @@ This is based on llama-2-7b and fine-tuned on two Orca datasets. It took around
 
 Manatee is one of my first projects, so I hope you enjoy using it! To use it, you can either use it through the transformers library or, if you have limited memory, you can use the GPTQ version that is on my profile!
 
-In the future, I plan on fine-tuning higher parameter models or making a better version of Manatee-7b.
+In the future, I plan on fine-tuning higher parameter models or making a better version of Manatee-7b.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ashercn97__manatee-7b)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 45.29 |
+| ARC (25-shot)         | 54.52 |
+| HellaSwag (10-shot)   | 78.95 |
+| MMLU (5-shot)         | 49.26 |
+| TruthfulQA (0-shot)   | 46.77 |
+| Winogrande (5-shot)   | 74.51 |
+| GSM8K (5-shot)        | 7.05  |
+| DROP (3-shot)         | 5.99  |
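The Avg. row appears to be the unweighted mean of the seven benchmark scores in the table. A quick sketch to check that arithmetic (the dictionary below just restates the table values; it is not an official leaderboard API):

```python
# Scores copied from the leaderboard table in the diff above (metric -> value)
scores = {
    "ARC (25-shot)": 54.52,
    "HellaSwag (10-shot)": 78.95,
    "MMLU (5-shot)": 49.26,
    "TruthfulQA (0-shot)": 46.77,
    "Winogrande (5-shot)": 74.51,
    "GSM8K (5-shot)": 7.05,
    "DROP (3-shot)": 5.99,
}

# Unweighted mean, rounded to two decimals like the table's Avg. row
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 45.29
```

The result matches the reported Avg. of 45.29, which suggests no per-benchmark weighting is applied.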