Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -17,3 +17,17 @@ response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_
  response = response.lstrip(prompt)
  ```
 
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 42.44 |
+ | ARC (25-shot) | 48.04 |
+ | HellaSwag (10-shot) | 73.25 |
+ | MMLU (5-shot) | 35.04 |
+ | TruthfulQA (0-shot) | 39.92 |
+ | Winogrande (5-shot) | 70.17 |
+ | GSM8K (5-shot) | 6.22 |
+ | DROP (3-shot) | 24.46 |
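
For anyone who wants to drill into the per-task numbers behind this table, a minimal sketch of listing the files in the linked details dataset follows. It assumes the `huggingface_hub` package and network access; only the repo id is taken from the link above, everything else is illustrative and not part of this PR.

```python
# Illustrative only: list the files published in the leaderboard details repo
# linked above. Assumes `pip install huggingface_hub` and network access.
from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files(
    "open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf",
    repo_type="dataset",
)
for path in files:
    print(path)
```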