leaderboard-pr-bot committed
Commit 974daa0
1 Parent(s): 5416000

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -64,4 +64,17 @@ Risks and harms of large language models include the generation of harmful, offe

  **Use cases**

- KoRWKV is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
+ KoRWKV is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoRWKV-6B)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 25.0  |
+ | ARC (25-shot)       | 22.1  |
+ | HellaSwag (10-shot) | 32.18 |
+ | MMLU (5-shot)       | 24.69 |
+ | TruthfulQA (0-shot) | 39.05 |
+ | Winogrande (5-shot) | 51.14 |
+ | GSM8K (5-shot)      | 0.0   |
+ | DROP (3-shot)       | 5.83  |
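The `Avg.` row in the added table is consistent with an unweighted mean of the seven per-task scores. A minimal sketch, assuming that is how the leaderboard computes it (the averaging rule is not stated in this PR):

```python
# Per-task scores copied from the table added by this PR.
scores = {
    "ARC (25-shot)": 22.1,
    "HellaSwag (10-shot)": 32.18,
    "MMLU (5-shot)": 24.69,
    "TruthfulQA (0-shot)": 39.05,
    "Winogrande (5-shot)": 51.14,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 5.83,
}

# Assumption: Avg. is the plain arithmetic mean, rounded to 2 decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 25.0, matching the Avg. row above
```

Reproducing the reported 25.0 this way is a sanity check on the table, not a statement of the leaderboard's actual aggregation code.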