Adding Evaluation Results

#11
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -41,4 +41,17 @@ See more details in the "Training Details of Vicuna Models" section in the appen
 Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
 
 ## Difference between different versions of Vicuna
-See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-33b-v1.3)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 54.74 |
+| ARC (25-shot)       | 62.12 |
+| HellaSwag (10-shot) | 83.0  |
+| MMLU (5-shot)       | 59.22 |
+| TruthfulQA (0-shot) | 56.16 |
+| Winogrande (5-shot) | 77.03 |
+| GSM8K (5-shot)      | 13.72 |
+| DROP (3-shot)       | 31.92 |
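
For readers checking the numbers: the "Avg." row appears to be the unweighted mean of the seven benchmark scores. A minimal sketch verifying that, with the scores copied from the table above (the equal-weighting assumption is ours, not stated in this PR):

```python
# Minimal sketch, assuming "Avg." is the unweighted mean of the seven
# benchmark scores listed in the table above (equal weighting is an
# assumption, not stated in this PR).
scores = {
    "ARC (25-shot)": 62.12,
    "HellaSwag (10-shot)": 83.0,
    "MMLU (5-shot)": 59.22,
    "TruthfulQA (0-shot)": 56.16,
    "Winogrande (5-shot)": 77.03,
    "GSM8K (5-shot)": 13.72,
    "DROP (3-shot)": 31.92,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 54.74 -- matches the Avg. row
```

The per-task outputs behind these scores live in the details dataset linked above. A hedged sketch for browsing it with the Hugging Face `datasets` library; the repo id comes from the link, while the per-benchmark config names are dataset-specific, so they are queried at runtime rather than assumed:

```python
from datasets import get_dataset_config_names

# List the per-benchmark configs of the details dataset linked above;
# the exact config names are specific to this dataset, so we query them
# instead of hard-coding.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_lmsys__vicuna-33b-v1.3"
)
print(configs)
```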