nazneen committed
Commit 571e897 • 1 Parent(s): d0c0df4

model documentation

Files changed (1)
  1. README.md +176 -2
README.md CHANGED
@@ -1,8 +1,182 @@
  ---
  language: ko
  tags:
  - gpt2
- license: cc-by-nc-sa-4.0
  ---

- For more details: https://github.com/SKT-AI/KoGPT2
  ---
  language: ko
+ license: cc-by-nc-sa-4.0
  tags:
  - gpt2
  ---

+
+ # Model Card for kogpt2-base-v2
+
+ # Model Details
+
+ ## Model Description
+
+ [GPT-2](https://openai.com/blog/better-language-models/) is a language model trained to predict the next word in a given text and is optimized for sentence generation. `KoGPT2` is a Korean decoder language model trained on more than 40GB of text to overcome the limited Korean-language performance of existing models.
+
+ - **Developed by:** SK Telecom
+ - **Shared by [Optional]:** SK Telecom
+ - **Model type:** Text Generation
+ - **Language(s) (NLP):** Korean
+ - **License:** cc-by-nc-sa-4.0
+ - **Parent Model:** GPT-2
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/SKT-AI/KoGPT2/tree/master)
+   - [Model Demo Space](https://huggingface.co/spaces/gogamza/kogpt2-base-v2)
+
+
+
+ # Uses
+
+
+ ## Direct Use
+ This model can be used for the task of text generation; a short example is sketched below.
+
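+ The following is a minimal, illustrative sketch (not part of the upstream KoGPT2 documentation); the prompt and generation settings such as `max_length` are arbitrary choices:
+
+ ```python
+ # Hypothetical example: generate a Korean continuation with the text-generation pipeline.
+ from transformers import pipeline
+
+ generator = pipeline("text-generation", model="skt/kogpt2-base-v2")
+ print(generator("근육이 커지기 위해서는", max_length=64, do_sample=True)[0]["generated_text"])
+ ```
+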
+ ## Downstream Use [Optional]
+
+ More information needed.
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+
+
+ ## Recommendations
+
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+ The model authors also note in the [GitHub Repo](https://github.com/SKT-AI/KoGPT2/tree/master):
+
+ The tokenizer was trained with the `Character BPE tokenizer` from the [`tokenizers`](https://github.com/huggingface/tokenizers) package.
+
+ The vocabulary size is 51,200, and emoticons and emojis that are frequently used in conversation, such as the ones below, were added to improve the model's ability to recognize those tokens.
+
+ > 😀, 😁, 😆, 😅, 🤣, .. , `:-)`, `:)`, `-)`, `(-:`...
+
+ In addition to [Korean Wikipedia](https://ko.wikipedia.org/), a variety of data such as news articles, [Modu Corpus v1.0](https://corpus.korean.go.kr/), and [Blue House National Petitions](https://github.com/akngs/petitions) were used to train the model.
+
+
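+ As an illustrative sanity check (not from the upstream documentation; the printed values are expectations, not verified output), the released tokenizer can be inspected directly:
+
+ ```python
+ # Hypothetical sketch: load the published tokenizer and inspect its vocabulary.
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
+ print(len(tokenizer))                     # expected to be roughly 51,200
+ print(tokenizer.tokenize("안녕하세요 😀"))  # emojis should map to dedicated tokens
+ ```
+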
+ ## Training Procedure
+
+
+ ### Preprocessing
+
+ More information needed
+
+
+ ### Speeds, Sizes, Times
+ | Model | # of params | Type | # of layers | # of heads | ffn_dim | hidden_dims |
+ |--------------|:----:|:-------:|--------:|--------:|--------:|--------------:|
+ | `kogpt2-base-v2` | 125M | Decoder | 12 | 12 | 3072 | 768 |
+
+
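+ These hyperparameters can also be read from the published model configuration. A minimal sketch, assuming the standard GPT-2 config field names (`n_layer`, `n_head`, `n_embd`), which this card itself does not list:
+
+ ```python
+ # Hypothetical sketch: read the architecture hyperparameters from the model config.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("skt/kogpt2-base-v2")
+ print(config.n_layer, config.n_head, config.n_embd)  # expected: 12 12 768
+ # For GPT-2-style configs the feed-forward width defaults to 4 * n_embd, i.e. 3072 here.
+ ```
+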
+ # Evaluation
+
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+
+ ### Factors
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+
+ ## Results
+
+
+ ### Classification or Regression
+ | | [NSMC](https://github.com/e9t/nsmc)(acc) | [KorSTS](https://github.com/kakaobrain/KorNLUDatasets)(spearman) |
+ |---|---|---|
+ | **KoGPT2 2.0** | 89.1 | 77.8 |
+
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+
+ More information needed
+
+ ### Software
+
+ More information needed.
+
+ # Citation
+
+
+ **BibTeX:**
+
+ More information needed
+
+ # Glossary [optional]
+ More information needed
+
+ # More Information [optional]
+ More information needed
+
+
+ # Model Card Authors [optional]
+
+ SK Telecom in collaboration with Ezi Ozoani and the Hugging Face team
+
+
+ # Model Card Contact
+ The model authors also note in the [GitHub Repo](https://github.com/SKT-AI/KoGPT2/tree/master):
+ > Please file `KoGPT2`-related issues [here](https://github.com/SKT-AI/KoGPT2/issues).
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load the KoGPT2 tokenizer and the causal language model from the Hugging Face Hub.
+ tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
+ model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")
+ ```
+ </details>
+
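+ Building on the snippet above, the following is a short, hedged usage sketch; the prompt, `max_length`, and sampling settings are illustrative assumptions rather than recommendations from the model authors:
+
+ ```python
+ # Hypothetical generation example with the loaded tokenizer and model.
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
+ model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")
+
+ inputs = tokenizer("근육이 커지기 위해서는", return_tensors="pt")
+ outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=50)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```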