Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -73,3 +73,17 @@ If you find this project useful in your research, please consider citing:
  - The perplexity evaluation code is modified upon [Landmark Attention](https://github.com/epfml/landmark-attention).
  - We use [LongChat](https://github.com/DachengLi1/LongChat) for the retrieval evaluation.
 
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yukang__Llama-2-13b-longlora-16k-ft)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 25.0  |
+ | ARC (25-shot)       | 25.85 |
+ | HellaSwag (10-shot) | 27.6  |
+ | MMLU (5-shot)       | 23.1  |
+ | TruthfulQA (0-shot) | 48.89 |
+ | Winogrande (5-shot) | 49.57 |
+ | GSM8K (5-shot)      | 0.0   |
+ | DROP (3-shot)       | 0.0   |
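
For reference, the Avg. row is the unweighted mean of the seven benchmark scores: (25.85 + 27.6 + 23.1 + 48.89 + 49.57 + 0.0 + 0.0) / 7 ≈ 25.0.

The per-example outputs behind these numbers live in the linked details dataset and can be loaded with the `datasets` library. A minimal sketch, not part of the diff above; the config name `harness_winogrande_5` is an assumption based on the leaderboard's usual per-task naming, so check the dataset card for the exact configs available:

```python
from datasets import load_dataset

# Load one benchmark's detailed results (per-example predictions and metrics).
# "harness_winogrande_5" is an assumed config name following the leaderboard's
# usual <harness_task_shots> pattern; see the dataset card for the real list.
details = load_dataset(
    "open-llm-leaderboard/details_Yukang__Llama-2-13b-longlora-16k-ft",
    "harness_winogrande_5",
    split="train",
)

print(details[0])  # inspect a single evaluated example
```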