RichardErkhov committed on
Commit 27542ee
1 Parent(s): f3256b3

uploaded readme

Files changed (1):
  1. README.md (+222, -0)

README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

polyglot-ko-3.8b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/polyglot-ko-3.8b/

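This repository ships the original model quantized to 8 bits with bitsandbytes. As a rough sketch of what that scheme looks like in `transformers` (the snippet quantizes the original checkpoint on the fly; swap in this repository's id to use the pre-quantized weights instead):

```python
# Minimal sketch of 8-bit (bitsandbytes) loading. Assumes the `transformers`,
# `accelerate` and `bitsandbytes` packages are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EleutherAI/polyglot-ko-3.8b"  # original model; this repo's id also works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```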

Original model description:

---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Polyglot-Ko-3.8B

## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.

| Hyperparameter       | Value |
|----------------------|-------|
| \\(n_{parameters}\\) | 3,809,974,272 |
| \\(n_{layers}\\)     | 32 |
| \\(d_{model}\\)      | 3,072 |
| \\(d_{ff}\\)         | 12,288 |
| \\(n_{heads}\\)      | 24 |
| \\(d_{head}\\)       | 128 |
| \\(n_{ctx}\\)        | 2,048 |
| \\(n_{vocab}\\)      | 30,003 / 30,080 |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

The model consists of 32 transformer layers with a model dimension of 3,072 and a feedforward dimension of 12,288. The model dimension is split into 24 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

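These hyperparameters can be checked against the published configuration. A minimal sketch, assuming the checkpoint loads as a GPT-NeoX architecture in `transformers` (whose config uses the attribute names below):

```python
# Sketch: read the published config to confirm the hyperparameters above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EleutherAI/polyglot-ko-3.8b")
print(config.num_hidden_layers)        # n_layers -> 32
print(config.hidden_size)              # d_model  -> 3072
print(config.intermediate_size)        # d_ff     -> 12288
print(config.num_attention_heads)      # n_heads  -> 24
print(config.max_position_embeddings)  # n_ctx    -> 2048
print(config.vocab_size)               # n_vocab
print(config.rotary_pct)               # fraction of each head's dims that use RoPE
```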
## Training data

Polyglot-Ko-3.8B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process complied with South Korean laws. The dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.

| Source                              | Size (GB) | Link                             |
|-------------------------------------|-----------|----------------------------------|
| Korean blog posts                   | 682.3     | -                                |
| Korean news dataset                 | 87.0      | -                                |
| Modu corpus                         | 26.4      | corpus.korean.go.kr              |
| Korean patent dataset               | 19.0      | -                                |
| Korean Q & A dataset                | 18.1      | -                                |
| KcBert dataset                      | 12.7      | github.com/Beomi/KcBERT          |
| Korean fiction dataset              | 6.1       | -                                |
| Korean online comments              | 4.2       | -                                |
| Korean Wikipedia                    | 1.4       | ko.wikipedia.org                 |
| Clova call                          | < 1.0     | github.com/clovaai/ClovaCall     |
| Naver sentiment movie corpus        | < 1.0     | github.com/e9t/nsmc              |
| Korean hate speech dataset          | < 1.0     | -                                |
| Open subtitles                      | < 1.0     | opus.nlpl.eu/OpenSubtitles.php   |
| AIHub various tasks datasets        | < 1.0     | aihub.or.kr                      |
| Standard Korean language dictionary | < 1.0     | stdict.korean.go.kr/main/main.do |

Furthermore, to prevent the model from memorizing and generating personally identifiable information (PII) found in the training data, we masked out the following sensitive information in the pre-processing stage (an illustrative masking sketch follows the list):

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number

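The team's exact pre-processing rules are not published; purely as an illustration of this kind of special-token masking, here is a hypothetical sketch (the regex patterns and helper name are invented for the example, not the actual Polyglot-Ko rules):

```python
import re

# Hypothetical illustration of masking PII with the special tokens above.
# These simplified patterns are NOT the actual rules used for Polyglot-Ko.
PII_PATTERNS = [
    (re.compile(r"\b\d{6}-\d{7}\b"), "<|rrn|>"),                  # resident registration number
    (re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"), "<|tell|>"),  # mobile phone number
    (re.compile(r"\b\d{3}-\d{2,6}-\d{2,7}\b"), "<|acc|>"),        # bank account number
]

def mask_pii(text: str) -> str:
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```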
## Training procedure
Polyglot-Ko-3.8B was trained for 219 billion tokens over 105,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.

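For reference, this next-token objective is the standard autoregressive negative log-likelihood (written out here for clarity, not quoted from the original card):

\\[ \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\left(x_t \mid x_{<t}\right) \\]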
## How to use

This model can be easily loaded using the `AutoModelForCausalLM` class:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-3.8b")
```
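
As a quick follow-up usage example (the Korean prompt and sampling settings below are placeholders, not from the original card):

```python
import torch

# Simple generation smoke test for the model loaded above.
prompt = "한국어 언어 모델은"  # placeholder prompt: "Korean language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```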

## Evaluation results

We evaluate Polyglot-Ko-3.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.

The following tables show results for varying numbers of few-shot examples. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the script below. For a fair comparison, all models were run under the same conditions and with the same prompts. In the tables, `n` refers to the number of few-shot examples.

In the case of the WiC dataset, all models perform at random-chance level.

```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-3.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path /path/to/output/
```

### COPA (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px">

### HellaSwag (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.5707** | **0.5830** | **0.5670** | **0.5787** |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px">

### BoolQ (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4320** | **0.5263** | **0.4930** | **0.4038** |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px">

### SentiNeg (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4858** | **0.7950** | **0.7320** | **0.7851** |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px">

### WiC (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.3390** | **0.4944** | **0.4203** | **0.3835** |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 |

<img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px">

## Limitations and Biases

Polyglot-Ko has been trained to optimize next-token prediction. Language models such as this are used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend using a human curator or another filtering mechanism to censor sensitive content.

## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{ko2023technical,
  title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models},
  author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and Jiwung Hyun and Sungho Park},
  year={2023},
  eprint={2306.02254},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Licensing
All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

### Acknowledgement

This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.