Tags: Text Generation · Transformers · Safetensors · English · qwen2 · conversational · Eval Results · Inference Endpoints · text-generation-inference
Commit 832fcc8 by leaderboard-pr-bot (1 parent: 108c532)

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)

README.md (+120 -4)
@@ -1,6 +1,8 @@
 ---
-library_name: transformers
+language:
+- en
 license: other
+library_name: transformers
 datasets:
 - Open-Orca/SlimOrca
 - m-a-p/Code-Feedback
@@ -12,8 +14,6 @@ datasets:
 - LDJnr/Capybara
 - jondurbin/airoboros-3.2
 - microsoft/orca-math-word-problems-200k
-language:
-- en
 inference:
   parameters:
     do_sample: true
@@ -22,6 +22,109 @@ inference:
     top_k: 40
     max_new_tokens: 250
     repetition_penalty: 1.1
+model-index:
+- name: Orca-2.0-Tau-1.8B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 37.12
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 61.13
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 45.27
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 39.1
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 59.59
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 28.96
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
+      name: Open LLM Leaderboard
 ---
 
 # Hercules-Mini-1.8B
@@ -105,4 +208,17 @@ Coming soon
 
 #### Hardware
 
-We used 8 Kaggle TPUs, and we trained at a global batch size of 128 and sequence length of 2048.
+We used 8 Kaggle TPUs, and we trained at a global batch size of 128 and sequence length of 2048.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__Orca-2.0-Tau-1.8B)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |45.20|
+|AI2 Reasoning Challenge (25-Shot)|37.12|
+|HellaSwag (10-Shot) |61.13|
+|MMLU (5-Shot) |45.27|
+|TruthfulQA (0-shot) |39.10|
+|Winogrande (5-shot) |59.59|
+|GSM8k (5-shot) |28.96|
+
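A quick sanity check on the table: the Avg. row is just the arithmetic mean of the six benchmark scores, (37.12 + 61.13 + 45.27 + 39.10 + 59.59 + 28.96) / 6 = 45.195, displayed as 45.20. A minimal sketch that recomputes it from the merged card's front matter is below; it assumes a local README.md whose YAML front matter is delimited by `---` lines as in the diff above, and PyYAML for parsing.

```python
# Sketch: recompute the leaderboard average from the model-index block
# that this PR adds to the card's YAML front matter.
# Assumes README.md is in the current directory and PyYAML is installed.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# The front matter is the block between the first two '---' delimiters.
front_matter = text.split("---")[1]
card = yaml.safe_load(front_matter)

results = card["model-index"][0]["results"]
scores = [entry["metrics"][0]["value"] for entry in results]
print(scores)                              # [37.12, 61.13, 45.27, 39.1, 59.59, 28.96]
print(f"{sum(scores) / len(scores):.3f}")  # 45.195, shown as 45.20 in the table
```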
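Relatedly, the `inference.parameters` block kept in the front matter maps one-to-one onto `transformers` generation arguments, so the widget settings can be reproduced locally. A minimal sketch, assuming the repo id `M4-ai/Orca-2.0-Tau-1.8B` from the leaderboard query URLs in this PR and an illustrative prompt:

```python
# Sketch: reproduce the card's inference.parameters block with transformers.
# The repo id is assumed from the leaderboard query URLs in this PR.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Orca-2.0-Tau-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain why a repetition penalty helps small chat models."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt")

# The four keyword arguments mirror the card's inference.parameters block.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```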