leaderboard-pr-bot committed
Commit 4bc3c53 (1 parent: fec4e69)

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
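If you want to sanity-check the merged metadata locally, a minimal sketch using the `huggingface_hub` client is below. The repo id comes from the diff in this PR; `ModelCard.load` and the parsed `eval_results` are standard `huggingface_hub` APIs, but treat the exact attribute names as an assumption that may vary by library version.

```python
# Minimal sketch: inspect the model-index metadata this PR adds
# (assumes the PR has been merged into the repo).
from huggingface_hub import ModelCard

card = ModelCard.load("adamo1139/Mistral-7B-AEZAKMI-v1")
# card.data.eval_results holds the parsed model-index entries;
# attribute names below follow current huggingface_hub conventions.
for result in card.data.eval_results or []:
    print(result.dataset_name, result.metric_type, result.metric_value)
```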

Files changed (1)
  1. README.md +117 -0
README.md CHANGED
@@ -2,6 +2,109 @@
 license: other
 license_name: other
 license_link: LICENSE
+model-index:
+- name: Mistral-7B-AEZAKMI-v1
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 58.87
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 82.01
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 58.72
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 53.54
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 75.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.68
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Mistral-7B-AEZAKMI-v1
+      name: Open LLM Leaderboard
 ---
 Mistral 7B model fine-tuned on AEZAKMI v1 dataset that is derived from airoboros 2.2.1 and airoboros 2.2.
 Finetuned with axolotl, using qlora and nf4 double quant, around 2 epochs, batch size 8, lr 0.00008, lr scheduler cosine. Scheduled training was 5 epochs, but loss seemed fine after 2 so I finished it quicker.
@@ -13,3 +116,17 @@ Don't expect it to be good at math, riddles or be crazy smart. My end goal with
 
 
 Not sure what license it needs to have, given license of airoboros dataset. I'll leave it as other for now.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_adamo1139__Mistral-7B-AEZAKMI-v1)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |54.92|
+|AI2 Reasoning Challenge (25-Shot)|58.87|
+|HellaSwag (10-Shot) |82.01|
+|MMLU (5-Shot) |58.72|
+|TruthfulQA (0-shot) |53.54|
+|Winogrande (5-shot) |75.69|
+|GSM8k (5-shot) | 0.68|
+
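As a sanity check, the Avg. row in the added table is the arithmetic mean of the six scores: (58.87 + 82.01 + 58.72 + 53.54 + 75.69 + 0.68) / 6 ≈ 54.92.

For context on the training setup the card describes (QLoRA with NF4 double quantization), here is a minimal sketch of an equivalent configuration in `transformers`/`peft`/`bitsandbytes`. The card says the finetune was done with axolotl, so this is only an illustration of the same quantization/adapter idea, not the author's actual config: the base model name and all LoRA hyperparameters are assumptions, while the learning rate, scheduler, batch size, and epoch count come from the card.

```python
# Sketch of a QLoRA setup with NF4 double quantization, as described in the
# card. Base model name and LoRA hyperparameters are assumptions; the card
# only says "Mistral 7B" and does not give adapter settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NF4, as stated in the card
    bnb_4bit_use_double_quant=True,      # "double quant", as stated in the card
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",         # assumption: base model not named exactly
    quantization_config=bnb_config,
    device_map="auto",
)

peft_config = LoraConfig(
    r=16,                                # assumption: rank not given in the card
    lora_alpha=32,                       # assumption
    lora_dropout=0.05,                   # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# From the card: lr 0.00008, cosine lr scheduler, batch size 8,
# stopped after ~2 of 5 scheduled epochs.
```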