leaderboard-pr-bot committed
Commit 869b4a2
Parent: 21fb249

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -16,4 +16,17 @@ This is based on llama-2-7b and fine-tuned on two Orca datasets. It took around
 
 Manatee is one of my first projects, so I hope you enjoy using it! To use it, you can either use it through the transformer library or if you have limited memory, you can use the GPTQ version that is on my profile!
 
-In the future, I plan on fine-tuning higher parameter models or making a better version of Manatee-7b.
+In the future, I plan on fine-tuning higher parameter models or making a better version of Manatee-7b.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ashercn97__manatee-7b)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 45.29 |
+| ARC (25-shot)        | 54.52 |
+| HellaSwag (10-shot)  | 78.95 |
+| MMLU (5-shot)        | 49.26 |
+| TruthfulQA (0-shot)  | 46.77 |
+| Winogrande (5-shot)  | 74.51 |
+| GSM8K (5-shot)       | 7.05  |
+| DROP (3-shot)        | 5.99  |
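
For reference, the model card text above says the model can be loaded through the transformers library. Below is a minimal sketch of that usage; the repo id `ashercn97/manatee-7b` is an assumption inferred from the leaderboard details dataset name, and the snippet assumes `transformers`, `torch`, and `accelerate` are installed.

```python
# Minimal sketch: load the model with the transformers library.
# NOTE: the repo id below is an assumption inferred from the
# "details_ashercn97__manatee-7b" dataset name, not confirmed by this PR.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ashercn97/manatee-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

prompt = "What is a manatee?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```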