Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -42,3 +42,17 @@ You could also alternatively launch a CLI demo by using the script in [LLaMA-Fac
  ```bash
  python src/cli_demo.py --template baichuan2 --model_name_or_path hiyouga/Baichuan2-7B-Chat-LLaMAfied
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 47.92 |
+ | ARC (25-shot)       | 52.47 |
+ | HellaSwag (10-shot) | 74.04 |
+ | MMLU (5-shot)       | 53.88 |
+ | TruthfulQA (0-shot) | 48.04 |
+ | Winogrande (5-shot) | 69.14 |
+ | GSM8K (5-shot)      | 10.92 |
+ | DROP (3-shot)       | 26.94 |
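As a sanity check on the added table, the leaderboard's "Avg." row is simply the unweighted mean of the seven per-benchmark scores (a minimal sketch; the score values are taken from the table above, and the rounding to two decimals is an assumption about how the leaderboard displays the average):

```python
# Verify that "Avg." (47.92) is the unweighted mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 52.47,
    "HellaSwag (10-shot)": 74.04,
    "MMLU (5-shot)": 53.88,
    "TruthfulQA (0-shot)": 48.04,
    "Winogrande (5-shot)": 69.14,
    "GSM8K (5-shot)": 10.92,
    "DROP (3-shot)": 26.94,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 47.92, matching the "Avg." row in the table
```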