Anonimus12345678902 committed
Commit c1cded6
Parent: 17c61f9

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1):
  1. README.md (+118 −2)
README.md CHANGED

@@ -1,6 +1,109 @@
 ---
-inference: false
 license: llama2
+inference: false
+model-index:
+- name: vicuna-13b-v1.5-16k
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 56.74
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 80.37
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 55.28
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 51.96
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 72.38
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 13.12
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lmsys/vicuna-13b-v1.5-16k
+      name: Open LLM Leaderboard
 ---
 
 # Vicuna Model Card
@@ -45,4 +148,17 @@ Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge.
 
 ## Difference between different versions of Vicuna
 
-See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-v1.5-16k)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |54.97|
+|AI2 Reasoning Challenge (25-Shot)|56.74|
+|HellaSwag (10-Shot)              |80.37|
+|MMLU (5-Shot)                    |55.28|
+|TruthfulQA (0-shot)              |51.96|
+|Winogrande (5-shot)              |72.38|
+|GSM8k (5-shot)                   |13.12|
+
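For anyone checking the numbers: the `Avg.` row in the added table is simply the unweighted arithmetic mean of the six benchmark scores. A minimal sketch to verify this, with the scores hard-coded from the table above:

```python
# Per-benchmark scores copied from the leaderboard table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 56.74,
    "HellaSwag (10-Shot)": 80.37,
    "MMLU (5-Shot)": 55.28,
    "TruthfulQA (0-shot)": 51.96,
    "Winogrande (5-shot)": 72.38,
    "GSM8k (5-shot)": 13.12,
}

# The leaderboard average is the unweighted mean of the six scores:
# 329.85 / 6 = 54.975 exactly in decimal, reported as 54.97.
average = sum(scores.values()) / len(scores)
print(average)
```

This agrees with the reported `Avg.` of 54.97 to within the table's rounding.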