Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


polyglot-ko-12.8b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/polyglot-ko-12.8b/

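An 8-bit bitsandbytes checkpoint like this one is normally loaded through `transformers`. The sketch below is illustrative only: it assumes recent `transformers`, `accelerate`, and `bitsandbytes` installs, and `REPO_ID` is a placeholder to point at whichever repository actually hosts these quantized weights.

```python
# Minimal sketch: loading a causal LM in 8-bit with bitsandbytes via transformers.
# REPO_ID is a placeholder -- replace it with the repo hosting this 8-bit upload.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

REPO_ID = "EleutherAI/polyglot-ko-12.8b"  # placeholder; see the links above

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    device_map="auto",  # requires accelerate; spreads layers over available GPUs
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bitsandbytes 8-bit weights
)
```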

Original model description:
---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0

---
# Polyglot-Ko-12.8B

## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.

| Hyperparameter       | Value |
|----------------------|-------|
| \\(n_{parameters}\\) | 12,898,631,680 |
| \\(n_{layers}\\)     | 40 |
| \\(d_{model}\\)      | 5120 |
| \\(d_{ff}\\)         | 20,480 |
| \\(n_{heads}\\)      | 40 |
| \\(d_{head}\\)       | 128 |
| \\(n_{ctx}\\)        | 2,048 |
| \\(n_{vocab}\\)      | 30,003 / 30,080 |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

The model consists of 40 transformer layers with a model dimension of 5120 and a feedforward dimension of 20480. The model dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

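To make the "RoPE on 64 of 128 head dimensions" point concrete, here is a small, self-contained sketch of partial rotary embeddings. It is not the training code; the tensor layout, the interleaved pairing convention, and the base of 10000 are assumptions for illustration.

```python
import torch

def apply_partial_rope(x: torch.Tensor, rotary_dim: int = 64, base: float = 10000.0) -> torch.Tensor:
    """Rotate only the first `rotary_dim` dims of each head; pass the rest through.

    x: [seq_len, n_heads, head_dim] query or key tensor (illustrative layout).
    """
    seq_len, _, _ = x.shape
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]

    # One frequency per rotated dimension pair, as in Su et al. (2021).
    inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2).float() / rotary_dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)   # [seq, rotary_dim / 2]
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]   # broadcast over heads

    # Interleaved pairing from the RoPE paper; GPT-NeoX-style code pairs dimensions
    # differently (first half vs. second half), but the idea is the same.
    x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)
    return torch.cat([rotated, x_pass], dim=-1)

# head_dim = 128, but only the first 64 dimensions are rotated.
q = torch.randn(2048, 40, 128)
print(apply_partial_rope(q).shape)  # torch.Size([2048, 40, 128])
```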

## Training data

Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.

| Source                              | Size (GB) | Link                             |
|-------------------------------------|-----------|----------------------------------|
| Korean blog posts                   | 682.3     | -                                |
| Korean news dataset                 | 87.0      | -                                |
| Modu corpus                         | 26.4      | corpus.korean.go.kr              |
| Korean patent dataset               | 19.0      | -                                |
| Korean Q & A dataset                | 18.1      | -                                |
| KcBert dataset                      | 12.7      | github.com/Beomi/KcBERT          |
| Korean fiction dataset              | 6.1       | -                                |
| Korean online comments              | 4.2       | -                                |
| Korean wikipedia                    | 1.4       | ko.wikipedia.org                 |
| Clova call                          | < 1.0     | github.com/clovaai/ClovaCall     |
| Naver sentiment movie corpus        | < 1.0     | github.com/e9t/nsmc              |
| Korean hate speech dataset          | < 1.0     | -                                |
| Open subtitles                      | < 1.0     | opus.nlpl.eu/OpenSubtitles.php   |
| AIHub various tasks datasets        | < 1.0     | aihub.or.kr                      |
| Standard Korean language dictionary | < 1.0     | stdict.korean.go.kr/main/main.do |

Furthermore, to prevent the model from memorizing and generating personally identifiable information (PII) present in the training data, we masked out the following sensitive information in the pre-processing stage (a rough illustration of this kind of masking follows the list):

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number

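The preprocessing pipeline itself is not released, so the snippet below is only a hedged approximation of what token-based PII masking can look like; the regular expressions are simplified stand-ins, not the rules actually used to build the corpus.

```python
import re

# Simplified, illustrative patterns -- NOT the actual rules used to build the corpus.
PII_PATTERNS = {
    r"\b\d{6}-\d{7}\b": "<|rrn|>",                  # resident registration number (######-#######)
    r"\b01[016789]-?\d{3,4}-?\d{4}\b": "<|tell|>",  # Korean mobile phone number
    r"\b\d{2,6}-\d{2,6}-\d{2,8}\b": "<|acc|>",      # bank account number (formats vary by bank)
}

def mask_pii(text: str) -> str:
    """Replace PII-looking substrings with the special tokens used in the corpus."""
    for pattern, token in PII_PATTERNS.items():
        text = re.sub(pattern, token, text)
    return text

print(mask_pii("문의: 010-1234-5678"))  # -> "문의: <|tell|>"
```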

## Training procedure
Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.

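As a rough consistency check of the figures above (approximate arithmetic only; the exact global batch size is not stated here):

```python
# Back-of-the-envelope arithmetic from the numbers quoted above (approximate).
tokens, steps, ctx_len = 167e9, 301_000, 2048
tokens_per_step = tokens / steps            # roughly 555k tokens per optimizer step
seqs_per_step = tokens_per_step / ctx_len   # roughly 271 full-length sequences per step
print(f"~{tokens_per_step:,.0f} tokens/step, ~{seqs_per_step:.0f} sequences of {ctx_len} tokens")
```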

## How to use

This model can be easily loaded using the `AutoModelForCausalLM` class:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-12.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-12.8b")
```

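For completeness, here is a short generation sketch that continues from the snippet above; the prompt and sampling settings are arbitrary examples, not recommendations from the model authors.

```python
import torch

# Continues from the loading snippet above (tokenizer and model already created).
prompt = "한국의 수도는"  # "The capital of Korea is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,   # illustrative settings; tune for your use case
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```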

## Evaluation results

We evaluate Polyglot-Ko-12.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with five downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.

The following tables show the results for different numbers of few-shot examples. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following script. For a fair comparison, all models were run under the same conditions and with the same prompts. In the tables, `n` refers to the number of few-shot examples.

In the case of the WiC dataset, all models show near-random performance.

```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-12.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path /path/to/output/
```

### COPA (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.7937** | **0.8108** | **0.8037** | **0.8369** |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px">

### HellaSwag (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.5954** | **0.6306** | **0.6098** | **0.6118** |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px">

### BoolQ (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.4818** | **0.6041** | **0.6289** | **0.6448** |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px">

### SentiNeg (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| **[EleutherAI/polyglot-ko-12.8b (this)](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)** | **12.8B** | **0.9117** | **0.9015** | **0.9345** | **0.9723** |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px">

### WiC (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.3985** | **0.3683** | **0.3307** | **0.3273** |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px">

## Limitations and Biases

Polyglot-Ko has been trained to optimize next-token prediction. Language models such as this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response, but rather the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend having a human curator or another filtering mechanism to censor sensitive content.

## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{ko2023technical,
      title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models},
      author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and Jiwung Hyun and Sungho Park},
      year={2023},
      eprint={2306.02254},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Licensing
All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

### Acknowledgement

This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.