---
license: apache-2.0
widget:
- text: My name is El Microondas the Wise, and
  example_title: El Microondas
- text: Kennesaw State University is a public
  example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
    developing the award winning Halo series of video games. They also made Destiny.
    The studio was founded
  example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
  example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
  example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
    have water, but no fish. What am I?

    Answer:'
  example_title: Riddle
- text: The process of photosynthesis involves the conversion of
  example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
    and a loaf of bread. When she got home, she realized she forgot
  example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
    and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
    they meet if the distance between the stations is 300 miles?

    To determine'
  example_title: Math Problem
- text: In the context of computer programming, an algorithm is
  example_title: Algorithm Definition
model-index:
- name: Mixsmol-4x400M-v0.1-epoch1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 22.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 30.57
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.28
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 39.03
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 52.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.15
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1
      name: Open LLM Leaderboard
---
# Mixsmol-4x400M-v0.1 by Ontocord
This is the first checkpoint (epoch 1) of Mixsmol-4x400M-v0.1.
Note that this is an experiment in data mixing, so we trained the model on only 50B tokens (95% English, 5% Vietnamese) to test the following:
- Reasoning capabilities through pretraining on high-quality synthetic textbook data
- Cross-lingual understanding through machine translation and multilingual, multi-task pretraining

Once this run verifies our hypotheses, we will schedule a second run with more data and compute so the model can reach its full capability.
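A minimal way to try this checkpoint with the `transformers` library. This is a sketch, not part of the original card: the `generate` helper is our own naming, and it assumes `transformers` and `torch` are installed; the repo id comes from this card's metadata.

```python
# Minimal usage sketch for the epoch-1 checkpoint.
# Assumes the `transformers` and `torch` packages are installed.
MODEL_ID = "vilm/Mixsmol-4x400M-v0.1-epoch1"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Greedy-decode a continuation of `prompt` with the checkpoint."""
    # Imported lazily so nothing is downloaded until the helper is called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (downloads the model on first use):
# generate("Kennesaw State University is a public")
```

Greedy decoding (`do_sample=False`) is used here only to make the sketch deterministic; sampling parameters can be passed to `model.generate` as usual.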

## Data
- Synthetic Textbooks: 8M samples
- RefinedWeb: 1M samples
- RedPajama-v2: 500K samples
- MathPile: full dataset
- The Pile: MiniPile subset
- GoodWiki
- The Stack Smol XL
- The Vault: train_small split
- Instruction Pretraining: 250K samples

## Evaluation

| Tasks             | Version | Filter     | n-shot | Metric      |  Value |   | Stderr |
|-------------------|---------|------------|-------:|-------------|-------:|---|-------:|
| arc_challenge     | Yaml    | none       |     25 | acc         | 0.1937 | ± | 0.0115 |
|                   |         | none       |     25 | acc_norm    | 0.2329 | ± | 0.0124 |
| hellaswag         | Yaml    | none       |     10 | acc         | 0.2856 | ± | 0.0045 |
|                   |         | none       |     10 | acc_norm    | 0.3090 | ± | 0.0046 |
| mmlu              | N/A     | none       |      0 | acc         | 0.2536 | ± | 0.0483 |
| - humanities      | N/A     | none       |      5 | acc         | 0.2408 | ± | 0.0341 |
| - other           | N/A     | none       |      5 | acc         | 0.2475 | ± | 0.0443 |
| - social_sciences | N/A     | none       |      5 | acc         | 0.2567 | ± | 0.0456 |
| - stem            | N/A     | none       |      5 | acc         | 0.2756 | ± | 0.0653 |
| truthfulqa_mc2    | Yaml    | none       |      0 | acc         | 0.3909 | ± | 0.0148 |
| winogrande        | Yaml    | none       |      5 | acc         | 0.5107 | ± | 0.0140 |
| gsm8k             | Yaml    | get-answer |      5 | exact_match | 0.0000 | ± | 0.0000 |

## Contribution
This work is a joint contribution of **Ontocord, BEE-spoke-data, and VILM**.

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vilm__Mixsmol-4x400M-v0.1-epoch1)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |28.45|
|AI2 Reasoning Challenge (25-Shot)|22.87|
|HellaSwag (10-Shot)              |30.57|
|MMLU (5-Shot)                    |25.28|
|TruthfulQA (0-shot)              |39.03|
|Winogrande (5-shot)              |52.80|
|GSM8k (5-shot)                   | 0.15|