---
language:
- ko
- en
pipeline_tag: text-generation
inference: false
tags:
- solar
- mistral
- pytorch
- solar-ko
library_name: transformers
license: cc-by-nc-sa-4.0
---

**Update Log**

- 2024.02.19: Initial test version release of SOLAR-KOEN-10.8B

# **SOLAR-KOEN** ⭐🇰🇷

Solar-KoEn is an advanced iteration of the upstage/SOLAR-10.7B-v1.0 model, featuring an expanded vocabulary and continued pretraining on a Korean+English corpus.

## Model Details

**Model Developers:** Junbum Lee (Beomi) & Taekyoon Choi (Taekyoon)

**Variations:** Solar-KoEn is available in a single size: 10.8B parameters, as a continually pretrained version.

**Input:** The model accepts only text input.

**Output:** The model produces text output exclusively.

**Model Architecture:**

SOLAR-KOEN-10.8B is an auto-regressive language model built on an optimized transformer architecture derived from Llama-2.

| Model | Training Data | Parameters | Content Length | GQA | Tokens | Learning Rate |
|---|---|---|---|---|---|---|
| SOLAR-KOEN-10.8B | *A curated mix of Korean+English corpora* | 10.8B | 2k | O | >15B* | 5e-5 |
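
Since the checkpoint ships in standard `transformers` format (see `library_name` above), it loads with the usual causal-LM API. A minimal sketch follows; the repo id comes from this card, while the dtype and device settings are illustrative assumptions:

```python
# Minimal loading sketch. The repo id is from this card; the dtype and
# device settings are assumptions, not prescribed by the authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/SOLAR-KOEN-10.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit a 10.8B model
    device_map="auto",           # requires `accelerate`
)

# This is a base (non-instruct) model, so prompt it as plain continuation.
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```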

**Training Corpus**

The model was trained on selected datasets from AIHub and the Modu Corpus. Detailed information about the training datasets is available below:

- AI Hub: [corpus/AI_HUB](./corpus/AI_HUB)
  - Only the `Training` segment of the data was used.
  - The `Validation` and `Test` segments were deliberately excluded.
- Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS)

The final JSONL dataset used to train this model is approximately 61GB in size.

Total token count: approximately 15 billion tokens (*counted with the expanded tokenizer; with the original SOLAR tokenizer, >60 billion tokens).
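
A count like this can be reproduced by streaming the JSONL corpus through the tokenizer. A minimal sketch, assuming each record carries a `"text"` field (the card does not document the actual schema, so that field name is an assumption):

```python
# Sketch: count tokens in a JSONL corpus with the expanded tokenizer.
# The "text" field name and corpus path are assumptions.
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beomi/SOLAR-KOEN-10.8B")

total_tokens = 0
with open("corpus.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        total_tokens += len(tokenizer.tokenize(record["text"]))

print(f"total tokens: {total_tokens:,}")
```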

**Vocab Expansion**

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original SOLAR | 32000 | SentencePiece BPE |
| **Expanded SOLAR-KOEN-10.8B** | 46336 | SentencePiece BPE; added Korean vocab and merges |
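
The card does not publish the exact expansion procedure. For orientation, the general mechanism in `transformers` looks like the sketch below: new Korean tokens are appended to the tokenizer and the embedding matrix is resized to match. Note that the table above also mentions added *merges*, which requires modifying the underlying SentencePiece model itself; `add_tokens` is only the simplest approximation of the idea, and the token list here is purely illustrative:

```python
# Illustrative vocab-expansion mechanism; not the authors' exact procedure.
# Real BPE merges require editing the SentencePiece model, not just add_tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "upstage/SOLAR-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

korean_tokens = ["안녕", "하세요", "날씨"]  # illustrative only
num_added = tokenizer.add_tokens(korean_tokens)
model.resize_token_embeddings(len(tokenizer))  # new rows start untrained
print(f"added {num_added} tokens; new vocab size: {len(tokenizer)}")
```

The newly added embedding rows start untrained, which is why continual pretraining on Korean text is needed before the expanded vocabulary pays off.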

**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**

- SOLAR-10.7B: 26 tokens
- SOLAR-KOEN-10.8B: 10 tokens

| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '날', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '좋', '네', '요', '.']` |
| SOLAR-KOEN-10.8B | `['▁안', '녕', '하세요', ',', '▁오늘', '은', '▁날', '씨가', '▁좋네요', '.']` |

**Tokenizing "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!"**

- SOLAR-10.7B: 22 tokens
- SOLAR-KOEN-10.8B: 22 tokens (English tokenization is unchanged)

| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |
| SOLAR-KOEN-10.8B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |
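
Both comparisons can be reproduced directly with the two tokenizers; a minimal sketch, assuming the original tokenizer is taken from upstage/SOLAR-10.7B-v1.0 (the base model this card names):

```python
# Reproduce the tokenization comparisons above.
from transformers import AutoTokenizer

original = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")
expanded = AutoTokenizer.from_pretrained("beomi/SOLAR-KOEN-10.8B")

for text in [
    "안녕하세요, 오늘은 날씨가 좋네요.",
    "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!",
]:
    for name, tok in [("SOLAR-10.7B", original), ("SOLAR-KOEN-10.8B", expanded)]:
        tokens = tok.tokenize(text)
        print(f"{name}: {len(tokens)} tokens -> {tokens}")
```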

# LICENSE

CC-BY-NC-SA-4.0

# **Model Benchmark**

## LM Eval Harness - Korean (polyglot branch)

- Evaluated with EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot), polyglot branch
- 5-shot scores
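
A hedged sketch of an equivalent run follows, assuming the polyglot branch exposes the harness's `simple_evaluate` API and its `hf-causal` model type; the task list is a subset drawn from the table below:

```python
# Sketch of a 5-shot run on the polyglot branch of lm-evaluation-harness.
# Assumes the branch's `simple_evaluate` API and `hf-causal` model type.
# Install: pip install git+https://github.com/EleutherAI/lm-evaluation-harness@polyglot
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=beomi/SOLAR-KOEN-10.8B",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag",
           "kobest_sentineg", "kobest_wic"],
    num_fewshot=5,
)
print(results["results"])
```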

| Task |Version| Metric | Value | |Stderr|
|-------------------|------:|------------|------:|---|-----:|
|klue_mrc | 0|exact |50.2140| | |
| | |f1 |54.0330| | |
| | |HasAns_exact|73.1786| | |
| | |HasAns_f1 |78.7442| | |
| | |best_exact |56.9594| | |
| | |best_f1 |60.3743| | |
|korquad | 1|exact_match |81.0530| | |
| | |f1 |87.6418| | |
|klue_nli | 0|acc | 0.4540|± |0.0091|
|klue_sts | 0|acc | 0.3410|± |0.0208|
| | |f1 | 0.4896|± |0.0237|
|klue_ynat | 0|acc | 0.6308|± |0.0051|
| | |macro_f1 | 0.6086|± |0.0057|
|kobest_boolq | 0|acc | 0.8711|± |0.0089|
| | |macro_f1 | 0.8705|± |0.0090|
|kobest_copa | 0|acc | 0.8500|± |0.0113|
| | |macro_f1 | 0.8498|± |0.0113|
|kobest_hellaswag | 0|acc | 0.5180|± |0.0224|
| | |acc_norm | 0.6180|± |0.0218|
| | |macro_f1 | 0.5138|± |0.0224|
|kobest_sentineg | 0|acc | 0.9723|± |0.0082|
| | |macro_f1 | 0.9723|± |0.0083|
|kobest_wic | 0|acc | 0.5825|± |0.0139|
| | |macro_f1 | 0.4952|± |0.0140|
|kohatespeech_apeach| 0|acc | 0.7034|± |0.0074|
| | |macro_f1 | 0.7033|± |0.0074|
|nsmc | 0|acc | 0.8738|± |0.0015|
|pawsx_ko | 0|acc | 0.5510|± |0.0111|
|kmmlu_direct | 0|exact_match | 0.4220|± |0.0909|

## Citation

```
@misc{solar_koen_junbum_taekyoon_2024,
  author    = { L. Junbum and Taekyoon Choi },
  title     = { SOLAR-KOEN-10.8B },
  year      = 2024,
  url       = { https://huggingface.co/beomi/SOLAR-KOEN-10.8B },
  publisher = { Hugging Face }
}
```

## Acknowledgements

- Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program.