Adding Evaluation Results

#2
Files changed (1): README.md (+14, -0)
README.md CHANGED
@@ -38,3 +38,17 @@ Github:[**Llama2-Chinese**](https://github.com/FlagAlpha/Llama2-Chinese)
  - [Chinese Q&A capability evaluation](https://github.com/FlagAlpha/Llama2-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama2 Chat model!
  - [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink), everyone is welcome to help build it together!
 
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FlagAlpha__Llama2-Chinese-7b-Chat)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 48.62 |
+ | ARC (25-shot)        | 52.39 |
+ | HellaSwag (10-shot)  | 77.52 |
+ | MMLU (5-shot)        | 47.72 |
+ | TruthfulQA (0-shot)  | 46.87 |
+ | Winogrande (5-shot)  | 74.27 |
+ | GSM8K (5-shot)       | 8.04  |
+ | DROP (3-shot)        | 33.53 |
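
For reference, the reported Avg. matches the unweighted mean of the seven individual benchmark scores; a minimal Python sketch, using the values copied from the table above, reproduces it:

```python
# Recompute the leaderboard "Avg." as the unweighted mean of the seven
# benchmark scores reported in the table above.
scores = {
    "ARC (25-shot)": 52.39,
    "HellaSwag (10-shot)": 77.52,
    "MMLU (5-shot)": 47.72,
    "TruthfulQA (0-shot)": 46.87,
    "Winogrande (5-shot)": 74.27,
    "GSM8K (5-shot)": 8.04,
    "DROP (3-shot)": 33.53,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # -> 48.62, matching the reported Avg.
```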
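
The linked details dataset can be pulled with the Hugging Face `datasets` library, as sketched below. The configuration name `harness_winogrande_5` is only an assumed example of the per-benchmark configs such detail datasets usually expose; check the dataset page for the names that actually exist.

```python
# Sketch: load per-sample evaluation details from the linked dataset.
# "harness_winogrande_5" is an assumed config name; the dataset card at
# https://huggingface.co/datasets/open-llm-leaderboard/details_FlagAlpha__Llama2-Chinese-7b-Chat
# lists the configurations that are really available.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_FlagAlpha__Llama2-Chinese-7b-Chat",
    "harness_winogrande_5",  # assumed config name
)
print(details)  # DatasetDict with the available splits for this run
```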