Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -46,4 +46,17 @@ See more details in the "Training Details of Vicuna Models" section in the appen
 Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
 
 ## Difference between different versions of Vicuna
-See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 47.69 |
+| ARC (25-shot)        | 54.61 |
+| HellaSwag (10-shot)  | 80.41 |
+| MMLU (5-shot)        | 52.88 |
+| TruthfulQA (0-shot)  | 52.14 |
+| Winogrande (5-shot)  | 74.82 |
+| GSM8K (5-shot)       | 10.77 |
+| DROP (3-shot)        |  8.24 |
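
For anyone who wants to go beyond the summary table, the details dataset linked in the diff can be pulled down programmatically. Below is a minimal sketch (not part of this PR) using the Hugging Face `datasets` library; the per-benchmark config names inside the details repo are an assumption here, so the snippet discovers them rather than hard-coding one.

```python
# Minimal sketch: fetch the detailed per-benchmark results linked above
# via the Hugging Face `datasets` library (requires network access).
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3"

# The details repo exposes one config per benchmark run; their exact
# names are assumed to vary, so list them instead of hard-coding.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first discovered config and inspect its splits.
details = load_dataset(repo, configs[0])
print(details)
```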