Files changed (1)
  1. README.md +122 -6
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
-license: apache-2.0
 language:
 - en
 - code
-datasets:
-- open-phi/programming_books_llama
-- open-phi/textbooks
+license: apache-2.0
 tags:
 - merge
 - computer science
+datasets:
+- open-phi/programming_books_llama
+- open-phi/textbooks
 inference:
   parameters:
     do_sample: true
@@ -18,7 +18,110 @@ inference:
     max_new_tokens: 250
     repetition_penalty: 1.15
 widget:
-- text: "To calculate the factorial of n, we can use the following function:"
+- text: 'To calculate the factorial of n, we can use the following function:'
+model-index:
+- name: TinyMistral-248M-v2.5
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 24.57
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 27.49
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 23.15
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 46.72
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 47.83
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.0
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
+      name: Open LLM Leaderboard
 ---
 # TinyMistral-248M-v2.5
 This model was created by merging TinyMistral-248M-v1 and v2, then further pretraining the merge on synthetic textbooks. In my own evaluation, the resulting model outperforms both.
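The description above says v2.5 was produced by merging TinyMistral-248M-v1 and v2 and then continuing pretraining. The card does not state which merge method was used, so the following is only a sketch of the simplest possibility, a naive element-wise weight average with `transformers`; both source repo ids are assumptions for illustration.

```python
# Rough sketch of a naive weight-space merge: element-wise averaging of
# two checkpoints that share an architecture. The actual merge method
# behind v2.5 is not stated on the card, and both repo ids below are
# assumptions.
import torch
from transformers import AutoModelForCausalLM

v1 = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M")     # assumed v1 repo id
v2 = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M-v2")  # assumed v2 repo id

with torch.no_grad():
    v2_params = dict(v2.named_parameters())
    for name, param in v1.named_parameters():
        # Overwrite v1's weights with the mean of the two checkpoints.
        param.copy_((param + v2_params[name]) / 2)

v1.save_pretrained("TinyMistral-248M-merged")  # merged model, ready for further pretraining
```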
@@ -52,4 +155,17 @@ This model can also answer basic questions, without needing to do any fine-tunin

 This model was also created as an attempt to fix an issue with v2: the weights were prone to exploding gradients, making it difficult to fine-tune. This model is easier to fine-tune.

-To get the best out of this model, I recommend installing it, and trying it out yourself, as the model's performance seems to degrade in the inference API.
+To get the best out of this model, I recommend installing it and trying it out yourself, as its performance seems to degrade in the hosted Inference API.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248M-v2.5).
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 28.29 |
+| AI2 Reasoning Challenge (25-Shot) | 24.57 |
+| HellaSwag (10-Shot)               | 27.49 |
+| MMLU (5-Shot)                     | 23.15 |
+| TruthfulQA (0-shot)               | 46.72 |
+| Winogrande (5-shot)               | 47.83 |
+| GSM8k (5-shot)                    |  0.00 |
+
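Following up on the card's recommendation to install the model and try it locally: a minimal local-generation sketch using `transformers`, reusing the card's widget prompt and its `inference.parameters` (do_sample, max_new_tokens, repetition_penalty). The repo id is taken from the leaderboard links above.

```python
# Run the model locally instead of through the hosted Inference API.
# Sampling settings mirror the card's `inference.parameters`.
from transformers import pipeline

generator = pipeline("text-generation", model="Locutusque/TinyMistral-248M-v2.5")

prompt = "To calculate the factorial of n, we can use the following function:"
result = generator(
    prompt,
    do_sample=True,
    max_new_tokens=250,
    repetition_penalty=1.15,
)
print(result[0]["generated_text"])
```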
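On the fine-tuning point: the card says v2's weights were prone to exploding gradients, and gradient clipping is the standard safeguard against that. A minimal single-step sketch with explicit clipping follows; the inline batch and hyperparameters are placeholders, not values from the card.

```python
# One fine-tuning step with explicit gradient clipping, a standard
# guard against exploding gradients. The tiny inline batch is a
# placeholder; real fine-tuning would loop over a tokenized dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tokenizer("def factorial(n):", return_tensors="pt")
model.train()
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()

# Cap the global gradient norm before the optimizer step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```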