---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0

---
# Polyglot-Ko-3.8B

## Model Description
Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team. Polyglot-Ko-3.8B is the second model in the series.

| Hyperparameter       | Value                                                                                                                                  |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 3,809,974,272                                                                                                                           |
| \\(n_{layers}\\)     | 32                                                                                                                                     |
| \\(d_{model}\\)      | 3,072                                                                                                                                   |
| \\(d_{ff}\\)         | 12,288                                                                                                                                   |
| \\(n_{heads}\\)      | 24                                                                                                                                     |
| \\(d_{head}\\)       | 128                                                                                                                                    |
| \\(n_{ctx}\\)        | 2,048                                                                                                                                   |
| \\(n_{vocab}\\)      | 30,003 / 30,080                                                                                                                        |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)                                                                   |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

The model consists of 32 transformer layers with a model dimension of 3072 and a feedforward dimension of 12288. The model
dimension is split into 24 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

## Training data

Polyglot-Ko was trained on a 1.2TB Korean dataset, a large-scale curated corpus created by [TUNiB](https://tunib.ai/).

## Training procedure

Polyglot-Ko was trained for 219 billion tokens over 105,000 steps on 256 * A100 GPUs with [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. 
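The cross-entropy objective above can be sketched in plain Python (an illustrative toy, not the actual GPT-NeoX training code): at each position, the loss is the negative log-probability the model assigns to the true next token, averaged over the sequence.

```python
import math

def next_token_cross_entropy(probs_per_step, target_ids):
    """Average cross-entropy over a sequence.

    probs_per_step: one probability distribution per position
                    (dict mapping token_id -> probability),
                    as predicted by the model.
    target_ids: the actual next token at each position.
    """
    total = 0.0
    for probs, target in zip(probs_per_step, target_ids):
        # Per-step loss: negative log-likelihood of the true next token.
        total += -math.log(probs[target])
    return total / len(target_ids)

# A model that puts high probability on the correct next token gets low loss.
steps = [{0: 0.9, 1: 0.1}, {0: 0.2, 1: 0.8}]
targets = [0, 1]
loss = next_token_cross_entropy(steps, targets)  # ≈ 0.1643
```

Minimizing this quantity over the training corpus is what "maximizing the likelihood of predicting the next token correctly" means in practice.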

## How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-3.8b")
```

## Data Risks

Polyglot models learn an inner representation of language that can be used to extract features useful for downstream tasks. The model is, however, best at what it was pre-trained for: generating text from a prompt.

### Privacy considerations
Pre-trained language models can memorize personal information contained in their training data. To mitigate this privacy risk, we added the following tokens to the vocabulary and replaced much of the personal information with these tokens during data preprocessing.

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number
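A hypothetical sketch of how such replacement might look during preprocessing (the regex patterns below are simplified illustrations, not the actual rules used for Polyglot-Ko):

```python
import re

# Simplified patterns for illustration only; the real preprocessing
# rules used for Polyglot-Ko are not published here.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"), "<|tell|>"),   # phone number
    (re.compile(r"\b\d{6}-\d{7}\b"), "<|rrn|>"),            # resident registration number
    (re.compile(r"\b\d{3}-\d{2,6}-\d{2,7}\b"), "<|acc|>"),  # bank account number
]

def mask_pii(text: str) -> str:
    """Replace personally identifiable numbers with special tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(mask_pii("Call 010-1234-5678"))  # → Call <|tell|>
```

Because the special tokens are part of the vocabulary, text masked this way tokenizes to a single token per redacted item rather than a string of digits.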

### Limitations and Biases
The core functionality of Polyglot is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Polyglot it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Polyglot to produce factually accurate output. Depending upon the use case, Polyglot may produce socially unacceptable text.

As with all language models, it is hard to predict in advance how Polyglot will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### Legal Restrictions
Because many countries have laws governing data collection, we collect data with due regard to the laws of those countries.
Additionally, we plan to use the dataset to train our models, but we do not plan to make the dataset publicly available.

## Evaluation results
We used the [KOBEST dataset](https://arxiv.org/abs/2204.04541), which consists of five Korean downstream tasks, for evaluation.
We added those tasks to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and utilized prompt templates described in the paper.
We evaluated our model, as well as two other Korean language models (skt/ko-gpt-trinity-1.2B-v0.5 and kakaobrain/kogpt), for comparison.
The following tables show the results for different numbers of few-shot examples. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts.

```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-3.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path /path/to/output/
```

**We decided to show only COPA and HellaSwag from KOBEST because evaluated models performed similarly to random guesses or with high variance on other tasks.**

### COPA (F1)

| Model                                                                                          | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B)                                | 7.5B   | 0.6723 | 0.6731 | 0.6769  | 0.7119  |
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) &dagger;   | 1.2B   | 0.6696 | 0.6477 | 0.6419  | 0.6514  |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) &ast;                              | 6.0B   | 0.7345 | 0.7287 | 0.7277  | 0.7479  |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours)       | 1.3B   | 0.7196 | 0.7193 | 0.7204  | 0.7206  |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this model) | 3.8B   | **0.7595** | **0.7608** | **0.7638**  | **0.7788**  |

<img src="https://user-images.githubusercontent.com/19511788/192492576-cdd80c5c-7c90-43e3-8a4b-7a8486878f23.png" width="800px">

### HellaSwag (F1)

| Model                                                                                          | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------|
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B)                                | 7.5B   | 0.4261 | 0.437  | 0.4409  | 0.4517  |
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) &dagger;   | 1.2B   | 0.4036 | 0.4    | 0.4011  | 0.4214  |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) &ast;                              | 6.0B   | **0.4599** | 0.456  | 0.4616  | 0.4754  |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours)       | 1.3B   | 0.4013 | 0.3984 | 0.417   | 0.4416  |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this model) | 3.8B   | 0.4438 | **0.4786** | **0.4737**  | **0.4822**  |

<img src="https://user-images.githubusercontent.com/19511788/192492585-a976ee38-2967-446a-b577-94f219228f4d.png" width="800px">

<p><strong>&dagger;</strong> The model card for this model provides evaluation results on the KOBEST dataset, but when we evaluated the model with the prompts described in the paper, we could not obtain similar results. We checked the KOBEST paper and found that the reported numbers were similar to its fine-tuning results. Because we evaluated by prompt-based generation without fine-tuning the model, the results provided by this model's card may differ.</p>

<p><strong>&ast;</strong> Since this model does not provide evaluation results on the KOBEST dataset, we evaluated it using lm-evaluation-harness ourselves. You can reproduce this result using the source code included in the polyglot branch of lm-evaluation-harness.</p>

## Citation and Related Information
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@misc{polyglot-ko,
  title = {{Polyglot-Ko: Open-Source Korean Autoregressive Language Model}},
  author = {Ko, Hyunwoong and Yang, Kichang and Ryu, Minho and Kim, Taekyun and Yang, Seungmu and Hyun, Jiwung and Park, Sungho},
  url = {https://www.github.com/eleutherai/polyglot},
  month = {9},
  year = {2022},
}
```

### Licensing
All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

However, as noted above, the model can generate unpredictable text. We are therefore not responsible for any damages resulting from use of the model.

### Acknowledgement
This project would not have been possible without the computing resources provided by [Stability.ai](https://stability.ai). Thanks for providing a large amount of GPU resources. Furthermore, thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.