Files changed (1)
  1. README.md +118 -2

README.md CHANGED
@@ -1,6 +1,109 @@
  ---
- inference: false
  license: apache-2.0
+ inference: false
+ model-index:
+ - name: vicuna-7b-v1.3-attention-sparsity-10
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 52.22
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 77.05
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 47.93
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 46.87
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 69.53
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 13.19
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
+       name: Open LLM Leaderboard
  ---
  # Overview
  This model has been pruned to 10% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
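The card itself ships no usage code. As a minimal sketch (not part of the model card), assuming the standard `transformers` API and LLaMA-style parameter names (`model.layers.N.self_attn.{q,k,v,o}_proj.weight`), you can load the checkpoint and sanity-check the claimed 10% attention sparsity:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the pruned checkpoint; fp16 keeps the 7B model within ~14 GB.
model = AutoModelForCausalLM.from_pretrained(
    "wang7776/vicuna-7b-v1.3-attention-sparsity-10",
    torch_dtype=torch.float16,
)

# Wanda was applied to the attention layers only, so count zeros in the
# q/k/v/o projection weights of each decoder block.
zeros, total = 0, 0
for name, param in model.named_parameters():
    if "self_attn" in name and name.endswith("proj.weight"):
        zeros += (param == 0).sum().item()
        total += param.numel()

print(f"Attention weight sparsity: {zeros / total:.2%}")  # expect roughly 10%
```

Because Wanda zeroes weights in place rather than storing them in a sparse format, the checkpoint loads and runs exactly like the dense base model.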
 
@@ -45,4 +148,17 @@ See more details in the "Training Details of Vicuna Models" section in the appen
  Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
 
  ## Difference between different versions of Vicuna
- See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+ See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10)
+
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 51.13 |
+ | AI2 Reasoning Challenge (25-Shot) | 52.22 |
+ | HellaSwag (10-Shot)               | 77.05 |
+ | MMLU (5-Shot)                     | 47.93 |
+ | TruthfulQA (0-shot)               | 46.87 |
+ | Winogrande (5-shot)               | 69.53 |
+ | GSM8k (5-shot)                    | 13.19 |
+
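The numbers above were produced by the Open LLM Leaderboard's evaluation runs, which use EleutherAI's lm-evaluation-harness at a pinned version. As a rough local reproduction of one task (a sketch, assuming a recent v0.4+ harness installed via `pip install lm-eval`; scores may differ slightly from the leaderboard's pinned build):

```python
# Hypothetical local re-run of the ARC-Challenge (25-shot) leaderboard task.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wang7776/vicuna-7b-v1.3-attention-sparsity-10,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])  # acc_norm should land near 52.22
```

The Avg. row is the plain arithmetic mean of the six benchmark scores: (52.22 + 77.05 + 47.93 + 46.87 + 69.53 + 13.19) / 6 ≈ 51.13.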