system (HF staff) committed
Commit 0b7c991
0 Parent(s)

Update files from the datasets library (from 1.8.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.8.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,441 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - ko
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+   ynat:
+   - text-classification
+   sts:
+   - text-scoring
+   nli:
+   - text-classification
+   ner:
+   - structure-prediction
+   re:
+   - structure-prediction
+   dp:
+   - structure-prediction
+   mrc:
+   - question-answering
+   wos:
+   - sequence-modeling
+ task_ids:
+   ynat:
+   - topic-classification
+   sts:
+   - semantic-similarity-scoring
+   nli:
+   - natural-language-inference
+   ner:
+   - named-entity-recognition
+   re:
+   - other-relation-extraction
+   dp:
+   - parsing
+   mrc:
+   - extractive-qa
+   wos:
+   - other-dialogue-state-tracking
+ paperswithcode_id: klue
+ ---
+
+ # Dataset Card for KLUE
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://klue-benchmark.com/
+ - **Repository:** https://github.com/KLUE-benchmark/KLUE
+ - **Paper:** [KLUE: Korean Language Understanding Evaluation](https://arxiv.org/abs/2105.09680)
+ - **Leaderboard:** [Leaderboard](https://klue-benchmark.com/leaderboard)
+ - **Point of Contact:** https://github.com/KLUE-benchmark/KLUE/issues
+
+ ### Dataset Summary
+
+ KLUE is a collection of 8 tasks to evaluate the natural language understanding capability of Korean language models. The 8 tasks were deliberately selected: Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking.
+
+ ### Supported Tasks and Leaderboards
+
+ Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking
+
+ ### Languages
+
+ `ko-KR`
+
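+ Each KLUE task is exposed as a separate configuration of this dataset. As a minimal sketch (assuming the `datasets` library is installed), any of the config names defined in `klue.py` below can be loaded with `load_dataset`:
+
+ ```python
+ from datasets import load_dataset
+
+ # config names: ynat, sts, nli, ner, re, dp, mrc, wos
+ ynat = load_dataset("klue", "ynat")
+ print(ynat)              # DatasetDict with "train" and "validation" splits
+ print(ynat["train"][0])  # first training example, as shown below
+ ```
+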
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### ynat
+ An example of 'train' looks as follows.
+
+ ```
+ {'date': '2016.06.30. 오전 10:36',
+  'guid': 'ynat-v1_train_00000',
+  'label': 3,
+  'title': '유튜브 내달 2일까지 크리에이터 지원 공간 운영',
+  'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008508947'}
+ ```
+
+ #### sts
+ An example of 'train' looks as follows.
+
+ ```
+ {'guid': 'klue-sts-v1_train_00000',
+  'labels': {'label': 3.7, 'real-label': 3.714285714285714, 'binary-label': 1},
+  'sentence1': '숙소 위치는 찾기 쉽고 일반적인 한국의 반지하 숙소입니다.',
+  'sentence2': '숙박시설의 위치는 쉽게 찾을 수 있고 한국의 대표적인 반지하 숙박시설입니다.',
+  'source': 'airbnb-rtt'}
+ ```
+
+ #### nli
+ An example of 'train' looks as follows.
+
+ ```
+ {'guid': 'klue-nli-v1_train_00000',
+  'hypothesis': '힛걸 진심 최고로 멋지다.',
+  'label': 0,
+  'premise': '힛걸 진심 최고다 그 어떤 히어로보다 멋지다',
+  'source': 'NSMC'}
+ ```
+
+ #### ner
+ An example of 'train' looks as follows.
+
+ ```
+ {'tokens': ['특', '히', ' ', '영', '동', '고', '속', '도', '로', ' ', '강', '릉', ' ', '방', '향', ' ', '문', '막', '휴', '게', '소', '에', '서', ' ', '만', '종', '분', '기', '점', '까', '지', ' ', '5', '㎞', ' ', '구', '간', '에', '는', ' ', '승', '용', '차', ' ', '전', '용', ' ', '임', '시', ' ', '갓', '길', '차', '로', '제', '를', ' ', '운', '영', '하', '기', '로', ' ', '했', '다', '.'],
+  'ner_tags': [12, 12, 12, 2, 3, 3, 3, 3, 3, 12, 2, 3, 12, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 8, 9, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
+  'sentence': '특히 <영동고속도로:LC> <강릉:LC> 방향 <문막휴게소:LC>에서 <만종분기점:LC>까지 <5㎞:QT> 구간에는 승용차 전용 임시 갓길차로제를 운영하기로 했다.'}
+ ```
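+
+ Since tokenization is character-level, `ner_tags` can be decoded back to tag names through the `Sequence(ClassLabel)` feature. A small sketch, assuming the config was loaded as `ner = load_dataset("klue", "ner")`:
+
+ ```python
+ tag_names = ner["train"].features["ner_tags"].feature.names  # ["B-DT", "I-DT", ..., "O"]
+ example = ner["train"][0]
+ # pair each character with its tag name
+ char_tag_pairs = [(tok, tag_names[tag]) for tok, tag in zip(example["tokens"], example["ner_tags"])]
+ ```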
+
+ #### re
+ An example of 'train' looks as follows.
+
+ ```
+ {'guid': 'klue-re-v1_train_00000',
+  'label': 0,
+  'object_entity': {'word': '조지 해리슨',
+   'start_idx': 13,
+   'end_idx': 18,
+   'type': 'PER'},
+  'sentence': '〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다.',
+  'source': 'wikipedia',
+  'subject_entity': {'word': '비틀즈',
+   'start_idx': 24,
+   'end_idx': 26,
+   'type': 'ORG'}}
+ ```
+
+ #### dp
+ An example of 'train' looks as follows.
+
+ ```
+ {'deprel': ['NP', 'NP_OBJ', 'VP', 'NP', 'NP_SBJ', 'NP', 'NP_MOD', 'NP_CNJ', 'NP_CNJ', 'NP', 'NP', 'NP_OBJ', 'AP', 'VP'],
+  'head': [2, 3, 14, 5, 14, 7, 10, 10, 10, 11, 12, 14, 14, 0],
+  'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
+  'lemma': ['해당', '그림 을', '보 면', '디즈니', '공주 들 이', '브리트니', '스피어스 의', '앨범 이나', '뮤직 비디오 ,', '화보', '속', '모습 을', '똑같이', '재연 하 였 다 .'],
+  'pos': ['NNG', 'NNG+JKO', 'VV+EC', 'NNP', 'NNG+XSN+JKS', 'NNP', 'NNP+JKG', 'NNG+JC', 'NNG+NNG+SP', 'NNG', 'NNG', 'NNG+JKO', 'MAG', 'NNG+XSA+EP+EF+SF'],
+  'sentence': '해당 그림을 보면 디즈니 공주들이 브리트니 스피어스의 앨범이나 뮤직비디오, 화보 속 모습을 똑같이 재연했다.',
+  'word_form': ['해당', '그림을', '보면', '디즈니', '공주들이', '브리트니', '스피어스의', '앨범이나', '뮤직비디오,', '화보', '속', '모습을', '똑같이', '재연했다.']}
+ ```
+
+ #### mrc
+ An example of 'train' looks as follows.
+
+ ```
+ {'answers': {'answer_start': [478, 478], 'text': ['한 달가량', '한 달']},
+  'context': '올여름 장마가 17일 제주도에서 시작됐다. 서울 등 중부지방은 예년보다 사나흘 정도 늦은 이달 말께 장마가 시작될 전망이다.17일 기상청에 따르면 제주도 남쪽 먼바다에 있는 장마전선의 영향으로 이날 제주도 산간 및 내륙지역에 호우주의보가 내려지면서 곳곳에 100㎜에 육박하는 많은 비가 내렸다. 제주의 장마는 평년보다 2~3일, 지난해보다는 하루 일찍 시작됐다. 장마는 고온다습한 북태평양 기단과 한랭 습윤한 오호츠크해 기단이 만나 형성되는 장마전선에서 내리는 비를 뜻한다.장마전선은 18일 제주도 먼 남쪽 해상으로 내려갔다가 20일께 다시 북상해 전남 남해안까지 영향을 줄 것으로 보인다. 이에 따라 20~21일 남부지방에도 예년보다 사흘 정도 장마가 일찍 찾아올 전망이다. 그러나 장마전선을 밀어올리는 북태평양 고기압 세력이 약해 서울 등 중부지방은 평년보다 사나흘가량 늦은 이달 말부터 장마가 시작될 것이라는 게 기상청의 설명이다. 장마전선은 이후 한 달가량 한반도 중남부를 오르내리며 곳곳에 비를 뿌릴 전망이다. 최근 30년간 평균치에 따르면 중부지방의 장마 시작일은 6월24~25일이었으며 장마기간은 32일, 강수일수는 17.2일이었다.기상청은 올해 장마기간의 평균 강수량이 350~400㎜로 평년과 비슷하거나 적을 것으로 내다봤다. 브라질 월드컵 한국과 러시아의 경기가 열리는 18일 오전 서울은 대체로 구름이 많이 끼지만 비는 오지 않을 것으로 예상돼 거리 응원에는 지장이 없을 전망이다.',
+  'guid': 'klue-mrc-v1_train_12759',
+  'is_impossible': False,
+  'news_category': '종합',
+  'question': '북태평양 기단과 오호츠크해 기단이 만나 국내에 머무르는 기간은?',
+  'question_type': 1,
+  'source': 'hankyung',
+  'title': '제주도 장마 시작 … 중부는 이달 말부터'}
+ ```
+
+ #### wos
+ An example of 'train' looks as follows.
+
+ ```
+ {'dialogue': [{'role': 'user',
+    'text': '쇼핑을 하려는데 서울 서쪽에 있을까요?',
+    'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽']},
+   {'role': 'sys',
+    'text': '서울 서쪽에 쇼핑이 가능한 곳이라면 노량진 수산물 도매시장이 있습니다.',
+    'state': []},
+   {'role': 'user',
+    'text': '오 네 거기 주소 좀 알려주세요.',
+    'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
+   {'role': 'sys', 'text': '노량진 수산물 도매시장의 주소는 서울 동작구 93806입니다.', 'state': []},
+   {'role': 'user',
+    'text': '알려주시는김에 연락처랑 평점도 좀 알려주세요.',
+    'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
+   {'role': 'sys', 'text': '그럼. 연락처는 6182006591이고 평점은 4점입니다.', 'state': []},
+   {'role': 'user',
+    'text': '와 감사합니다.',
+    'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
+   {'role': 'sys', 'text': '감사합니다.', 'state': []}],
+  'domains': ['관광'],
+  'guid': 'wos-v1_train_00001'}
+ ```
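+
+ Each user turn carries the cumulative dialogue state. For instance, the final state of a dialogue can be pulled out as follows (a sketch, assuming `wos = load_dataset("klue", "wos")`):
+
+ ```python
+ dialogue = wos["train"][0]["dialogue"]
+ # the last user turn holds the full accumulated state
+ final_state = next(turn["state"] for turn in reversed(dialogue) if turn["role"] == "user")
+ ```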
+
+ ### Data Fields
+
+ #### ynat
+
+ + `guid`: a `string` feature
+ + `title`: a `string` feature
+ + `label`: a classification label, with possible values `IT과학`(0), `경제`(1), `사회`(2), `생활문화`(3), `세계`(4), `스포츠`(5), `정치`(6)
+ + `url`: a `string` feature
+ + `date`: a `string` feature
+
+ #### sts
+
+ + `guid`: a `string` feature
+ + `source`: a `string` feature
+ + `sentence1`: a `string` feature
+ + `sentence2`: a `string` feature
+ + `labels`: a dictionary feature containing
+   + `label`: a `float64` feature
+   + `real-label`: a `float64` feature
+   + `binary-label`: a classification label, with possible values `negative`(0), `positive`(1)
+
+ #### nli
+
+ + `guid`: a `string` feature
+ + `source`: a `string` feature
+ + `premise`: a `string` feature
+ + `hypothesis`: a `string` feature
+ + `label`: a classification label, with possible values `entailment`(0), `neutral`(1), `contradiction`(2)
+
+ #### ner
+
+ + `sentence`: a `string` feature
+ + `tokens`: a list of `string` features (tokenization is at the character level)
+ + `ner_tags`: a list of classification labels, with possible values including `B-DT`(0), `I-DT`(1),
+   `B-LC`(2), `I-LC`(3), `B-OG`(4), `I-OG`(5), `B-PS`(6), `I-PS`(7), `B-QT`(8), `I-QT`(9), `B-TI`(10),
+   `I-TI`(11), `O`(12)
+
+ #### re
+
+ + `guid`: a `string` feature
+ + `sentence`: a `string` feature
+ + `subject_entity`: a dictionary feature containing
+   + `word`: a `string` feature
+   + `start_idx`: an `int32` feature
+   + `end_idx`: an `int32` feature
+   + `type`: a `string` feature
+ + `object_entity`: a dictionary feature containing
+   + `word`: a `string` feature
+   + `start_idx`: an `int32` feature
+   + `end_idx`: an `int32` feature
+   + `type`: a `string` feature
+ + `label`: a classification label, with possible values including `no_relation`(0), `org:dissolved`(1),
+   `org:founded`(2), `org:place_of_headquarters`(3), `org:alternate_names`(4), `org:member_of`(5),
+   `org:members`(6), `org:political/religious_affiliation`(7), `org:product`(8), `org:founded_by`(9), `org:top_members/employees`(10),
+   `org:number_of_employees/members`(11), `per:date_of_birth`(12), `per:date_of_death`(13), `per:place_of_birth`(14),
+   `per:place_of_death`(15), `per:place_of_residence`(16), `per:origin`(17), `per:employee_of`(18),
+   `per:schools_attended`(19), `per:alternate_names`(20), `per:parents`(21), `per:children`(22),
+   `per:siblings`(23), `per:spouse`(24), `per:other_family`(25), `per:colleagues`(26), `per:product`(27),
+   `per:religion`(28), `per:title`(29)
+ + `source`: a `string` feature
+
+ #### dp
+
+ + `sentence`: a `string` feature
+ + `index`: a list of `int32` features
+ + `word_form`: a list of `string` features
+ + `lemma`: a list of `string` features
+ + `pos`: a list of `string` features
+ + `head`: a list of `int32` features
+ + `deprel`: a list of `string` features
+
+
+ #### mrc
+
+ + `title`: a `string` feature
+ + `context`: a `string` feature
+ + `news_category`: a `string` feature
+ + `source`: a `string` feature
+ + `guid`: a `string` feature
+ + `is_impossible`: a `bool` feature
+ + `question_type`: an `int32` feature
+ + `question`: a `string` feature
+ + `answers`: a dictionary feature containing
+   + `answer_start`: an `int32` feature
+   + `text`: a `string` feature
+
+
+ #### wos
+
+ + `guid`: a `string` feature
+ + `domains`: a list of `string` features
+ + `dialogue`: a list of dictionary features containing
+   + `role`: a `string` feature
+   + `text`: a `string` feature
+   + `state`: a list of `string` features
+
+
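+ For the classification fields above, the integer ids map back to label names through the `ClassLabel` feature, e.g. for the `nli` config (a sketch, assuming `nli = load_dataset("klue", "nli")`):
+
+ ```python
+ label = nli["train"].features["label"]
+ print(label.int2str(0))                # "entailment"
+ print(label.str2int("contradiction"))  # 2
+ ```
+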
+ ### Data Splits
+
+ #### ynat
+
+ You can see more details [here](https://klue-benchmark.com/tasks/66/data/description).
+
+ + train: 45,678
+ + validation: 9,107
+
+
+ #### sts
+
+ You can see more details [here](https://klue-benchmark.com/tasks/67/data/description).
+
+ + train: 11,668
+ + validation: 519
+
+ #### nli
+
+ You can see more details [here](https://klue-benchmark.com/tasks/68/data/description).
+
+ + train: 24,998
+ + validation: 3,000
+
+ #### ner
+
+ You can see more details [here](https://klue-benchmark.com/tasks/69/overview/description).
+
+ + train: 21,008
+ + validation: 5,000
+
+ #### re
+
+ You can see more details [here](https://klue-benchmark.com/tasks/70/overview/description).
+
+ + train: 32,470
+ + validation: 7,765
+
+ #### dp
+
+ You can see more details [here](https://klue-benchmark.com/tasks/71/data/description).
+
+ + train: 10,000
+ + validation: 2,000
+
+ #### mrc
+
+ You can see more details [here](https://klue-benchmark.com/tasks/72/overview/description).
+
+ + train: 17,554
+ + validation: 5,841
+
+ #### wos
+
+ You can see more details [here](https://klue-benchmark.com/tasks/73/overview/description).
+
+ + train: 8,000
+ + validation: 1,000
+
+
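+ As a quick sanity check, the split sizes above can be reproduced programmatically (a sketch iterating over every config):
+
+ ```python
+ for config in ["ynat", "sts", "nli", "ner", "re", "dp", "mrc", "wos"]:
+     ds = load_dataset("klue", config)
+     print(config, ds["train"].num_rows, ds["validation"].num_rows)
+ ```
+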
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ KLUE is released under CC-BY-SA-4.0, as declared in the YAML tags above and in `_LICENSE` in `klue.py`.
+
+ ### Citation Information
+
+ ```
+ @misc{park2021klue,
+     title={KLUE: Korean Language Understanding Evaluation},
+     author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
+     year={2021},
+     eprint={2105.09680},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@jungwhank](https://github.com/jungwhank), [@bzantium](https://github.com/bzantium) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"ynat": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/66/overview/description", "license": "CC-BY-SA-4.0", "features": {"guid": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 7, "names": ["IT\uacfc\ud559", "\uacbd\uc81c", "\uc0ac\ud68c", "\uc0dd\ud65c\ubb38\ud654", "\uc138\uacc4", "\uc2a4\ud3ec\uce20", "\uc815\uce58"], "names_file": null, "id": null, "_type": "ClassLabel"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "ynat", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10109664, "num_examples": 45678, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 2039197, "num_examples": 9107, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000066/data/ynat-v1.tar.gz": {"num_bytes": 4932555, "checksum": "820a4d1d6d1fd83e2a421f856965d3cfc5c93627935ce8c5b27468c6113fc482"}}, "download_size": 4932555, "post_processing_size": null, "dataset_size": 12148861, "size_in_bytes": 17081416}, "sts": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/67/overview/description", "license": "CC-BY-SA-4.0", "features": {"guid": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}, "sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "labels": {"label": {"dtype": "float64", "id": null, "_type": "Value"}, "real-label": {"dtype": "float64", "id": null, "_type": "Value"}, "binary-label": {"num_classes": 2, "names": ["negative", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "sts", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2832921, "num_examples": 11668, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 122657, "num_examples": 519, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000067/data/klue-sts-v1.tar.gz": {"num_bytes": 1349875, "checksum": "539341ba78a3b351c686cf70a448ac7a5886ed95f0719d5e3d2378ba703213bd"}}, "download_size": 1349875, "post_processing_size": null, "dataset_size": 2955578, "size_in_bytes": 4305453}, "nli": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/68/overview/description", "license": "CC-BY-SA-4.0", "features": {"guid": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}, "premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "nli", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5719930, "num_examples": 24998, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 673276, "num_examples": 3000, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000068/data/klue-nli-v1.tar.gz": {"num_bytes": 1257374, "checksum": "388be2033ef712072201903795a35b4f86826ee3ed3b62dc0c98e1721baa8850"}}, "download_size": 1257374, "post_processing_size": null, "dataset_size": 6393206, "size_in_bytes": 7650580}, "ner": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/69/overview/description", "license": "CC-BY-SA-4.0", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 13, "names": ["B-DT", "I-DT", "B-LC", "I-LC", "B-OG", "I-OG", "B-PS", "I-PS", "B-QT", "I-QT", "B-TI", "I-TI", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "ner", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 19891953, "num_examples": 21008, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 4937579, "num_examples": 5000, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000069/data/klue-ner-v1.tar.gz": {"num_bytes": 4308644, "checksum": "848a89759ac6b7c149c9a00d820726fe2a140c22782201f1a40d856672e7ea8e"}}, "download_size": 4308644, "post_processing_size": null, "dataset_size": 24829532, "size_in_bytes": 29138176}, "re": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/70/overview/description", "license": "CC-BY-SA-4.0", "features": {"guid": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "subject_entity": {"word": {"dtype": "string", "id": null, "_type": "Value"}, "start_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "end_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}}, "object_entity": {"word": {"dtype": "string", "id": null, "_type": "Value"}, "start_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "end_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}}, "label": {"num_classes": 30, "names": ["no_relation", "org:dissolved", "org:founded", "org:place_of_headquarters", "org:alternate_names", "org:member_of", "org:members", "org:political/religious_affiliation", "org:product", "org:founded_by", "org:top_members/employees", "org:number_of_employees/members", "per:date_of_birth", "per:date_of_death", "per:place_of_birth", "per:place_of_death", "per:place_of_residence", "per:origin", "per:employee_of", "per:schools_attended", "per:alternate_names", "per:parents", "per:children", "per:siblings", "per:spouse", "per:other_family", "per:colleagues", "per:product", "per:religion", "per:title"], "names_file": null, "id": null, "_type": "ClassLabel"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "re", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11145538, "num_examples": 32470, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 2559300, "num_examples": 7765, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000070/data/klue-re-v1.tar.gz": {"num_bytes": 5669259, "checksum": "b09ceac0d986cc09e42fcda9c7f2873c0eea8ec0629baf91fead36580790f8f5"}}, "download_size": 5669259, "post_processing_size": null, "dataset_size": 13704838, "size_in_bytes": 19374097}, "dp": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. 
With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/71/overview/description", "license": "CC-BY-SA-4.0", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "index": [{"dtype": "int32", "id": null, "_type": "Value"}], "word_form": [{"dtype": "string", "id": null, "_type": "Value"}], "lemma": [{"dtype": "string", "id": null, "_type": "Value"}], "pos": [{"dtype": "string", "id": null, "_type": "Value"}], "head": [{"dtype": "int32", "id": null, "_type": "Value"}], "deprel": [{"dtype": "string", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "dp", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7900009, "num_examples": 10000, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 1557506, "num_examples": 2000, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000071/data/klue-dp-v1.tar.gz": {"num_bytes": 2033461, "checksum": "2c76a3543a50599ac6640ad360ba00eac36e0b5b2363f708a614d6e50844d17b"}}, "download_size": 2033461, "post_processing_size": null, "dataset_size": 9457515, "size_in_bytes": 11490976}, "mrc": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/72/overview/description", "license": "CC-BY-SA-4.0", "features": {"title": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "news_category": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}, "guid": {"dtype": "string", "id": null, "_type": "Value"}, "is_impossible": {"dtype": "bool", "id": null, "_type": "Value"}, "question_type": {"dtype": "int32", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "mrc", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 46505665, "num_examples": 17554, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 15583053, "num_examples": 5841, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000072/data/klue-mrc-v1.tar.gz": {"num_bytes": 19218422, "checksum": "a444af252901452380d58a6320908ce4a86759bb6f38ad95d0ca98584ad33d14"}}, "download_size": 19218422, "post_processing_size": null, "dataset_size": 62088718, "size_in_bytes": 81307140}, "wos": {"description": "KLUE (Korean Language Understanding Evaluation)\nKorean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language\nunderstanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible\nto anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain\nunambiguous annotations for all datasets. 
Futhermore, we build an evaluation system and carefully choose evaluations metrics\nfor every task, thus establishing fair comparison across Korean language models.\n", "citation": "@misc{park2021klue,\n title={KLUE: Korean Language Understanding Evaluation},\n author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},\n year={2021},\n eprint={2105.09680},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://klue-benchmark.com/tasks/73/overview/description", "license": "CC-BY-SA-4.0", "features": {"guid": {"dtype": "string", "id": null, "_type": "Value"}, "domains": [{"dtype": "string", "id": null, "_type": "Value"}], "dialogue": [{"role": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "state": [{"dtype": "string", "id": null, "_type": "Value"}]}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "klue", "config_name": "wos", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 26677002, "num_examples": 8000, "dataset_name": "klue"}, "validation": {"name": "validation", "num_bytes": 3488943, "num_examples": 1000, "dataset_name": "klue"}}, "download_checksums": {"http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000073/data/wos-v1.tar.gz": {"num_bytes": 4785657, "checksum": "da17829300271560afc6e7fc330503c2ca6f7ae7721d9bb94308579542a5871f"}}, "download_size": 4785657, "post_processing_size": null, "dataset_size": 30165945, "size_in_bytes": 34951602}}
dummy/dp/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f68428f7376efdd7b1ced3a41e2276ea5b78af993c33ff6437ea37478fe150d7
+ size 2085
dummy/mrc/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:608f06472da3a89a64a9e60a926a6d61c5b6bf5e8b7da5c12f74168bf45afe70
+ size 3306
dummy/ner/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b13d84876433d36a7cd6a063e1140427046ec46e81a2a771135860789493b32a
+ size 1950
dummy/nli/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a19d1a92e3acc1e68e435243199e2fca98d1e5a8fdb397315b924f7f122c31b
+ size 1767
dummy/re/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea6a2fabc3378ba3dc8e6b233ae2c11d83642d72ff44a345ed7a66cdddfccda9
+ size 2982
dummy/sts/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a176a30bde7e52eb5c6d47f5f3a210ca1150b1c59c122c38d487b60f9c8daa50
+ size 2790
dummy/wos/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1e736318e979928cdbe1d69a751f436aac60b9d503707b369ae2d1352162c1a
+ size 6578
dummy/ynat/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2f02f6e8ebe7660d3548d53f27b9382d0301a39e3c08c1872db5513ee996c2a
+ size 2400
klue.py ADDED
@@ -0,0 +1,521 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """KLUE (Korean Language Understanding Evaluation) benchmark."""
+
+
+ import csv
+ import json
+ import os
+ import textwrap
+
+ import datasets
+
+
+ _KLUE_CITATION = """\
+ @misc{park2021klue,
+     title={KLUE: Korean Language Understanding Evaluation},
+     author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
+     year={2021},
+     eprint={2105.09680},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ """
+
+ _KLUE_DESCRIPTION = """\
+ KLUE (Korean Language Understanding Evaluation)
+ Korean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate natural language
+ understanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible
+ to anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain
+ unambiguous annotations for all datasets. Furthermore, we build an evaluation system and carefully choose evaluation metrics
+ for every task, thus establishing fair comparison across Korean language models.
+ """
+
+ _DATA_URLs = {
+     "ynat": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000066/data/ynat-v1.tar.gz",
+     "sts": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000067/data/klue-sts-v1.tar.gz",
+     "nli": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000068/data/klue-nli-v1.tar.gz",
+     "ner": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000069/data/klue-ner-v1.tar.gz",
+     "re": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000070/data/klue-re-v1.tar.gz",
+     "dp": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000071/data/klue-dp-v1.tar.gz",
+     "mrc": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000072/data/klue-mrc-v1.tar.gz",
+     "wos": "http://klue-benchmark.com.s3.amazonaws.com/app/Competitions/000073/data/wos-v1.tar.gz",
+ }
+
+ _DESCRIPTION_URLs = {
+     "ynat": "https://klue-benchmark.com/tasks/66/overview/description",
+     "sts": "https://klue-benchmark.com/tasks/67/overview/description",
+     "nli": "https://klue-benchmark.com/tasks/68/overview/description",
+     "ner": "https://klue-benchmark.com/tasks/69/overview/description",
+     "re": "https://klue-benchmark.com/tasks/70/overview/description",
+     "dp": "https://klue-benchmark.com/tasks/71/overview/description",
+     "mrc": "https://klue-benchmark.com/tasks/72/overview/description",
+     "wos": "https://klue-benchmark.com/tasks/73/overview/description",
+ }
+
+ _LICENSE = "CC-BY-SA-4.0"
+
+
+ class KlueConfig(datasets.BuilderConfig):
+     """BuilderConfig for KLUE."""
+
+     def __init__(
+         self,
+         features,
+         data_url,
+         url,
+         file_map,
+         **kwargs,
+     ):
+         """BuilderConfig for KLUE."""
+
+         super(KlueConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
+         self.features = features
+         self.data_url = data_url
+         self.url = url
+         self.file_map = file_map
+
+
+ class Klue(datasets.GeneratorBasedBuilder):
+     """The KLUE (Korean Language Understanding Evaluation) benchmark."""
+
+     BUILDER_CONFIGS = [
+         KlueConfig(
+             name="ynat",
+             features={
+                 "guid": datasets.Value("string"),
+                 "title": datasets.Value("string"),
+                 "label": datasets.features.ClassLabel(names=["IT과학", "경제", "사회", "생활문화", "세계", "스포츠", "정치"]),
+                 "url": datasets.Value("string"),
+                 "date": datasets.Value("string"),
+             },
+             description=textwrap.dedent(
+                 """\
+                 In topic classification (TC), the goal is to predict the topic of a given text
+                 snippet. We include TC in our KLUE benchmark, as inferring the topic of a text is a key
+                 capability that should be possessed by a language understanding system.
+                 Following a typical single sentence classification task, we introduce YNAT, a dataset of
+                 Yonhap News Agency news headlines for topic classification. For Korean, no dataset has been
+                 proposed for this task, which motivates us to construct the first Korean topic
+                 classification benchmark. In this task, given a news headline, a text classifier must
+                 predict a topic which is one of politics, economy, society, culture, world, IT/science,
+                 and sports. Macro-F1 score is used to evaluate a system."""
+             ),
+             data_url=_DATA_URLs["ynat"],
+             url=_DESCRIPTION_URLs["ynat"],
+             file_map={
+                 "train": "ynat-v1_train.json",
+                 "dev": "ynat-v1_dev.json",
+             },
+         ),
+         KlueConfig(
+             name="sts",
+             features={
+                 "guid": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+                 "sentence1": datasets.Value("string"),
+                 "sentence2": datasets.Value("string"),
+                 "labels": {
+                     "label": datasets.Value("float64"),
+                     "real-label": datasets.Value("float64"),
+                     "binary-label": datasets.ClassLabel(names=["negative", "positive"]),
+                 },
+             },
+             description=textwrap.dedent(
+                 """\
+                 STS is a task which aims to predict the semantic similarity of two input sentences as
+                 a real value between 0 and 5. Note that we further binarized the prediction scores
+                 into two classes with a threshold score of 3.0 (paraphrased or not) and evaluated with
+                 a classification metric.
+                 """
+             ),
+             data_url=_DATA_URLs["sts"],
+             url=_DESCRIPTION_URLs["sts"],
+             file_map={
+                 "train": "klue-sts-v1_train.json",
+                 "dev": "klue-sts-v1_dev.json",
+             },
+         ),
+         KlueConfig(
+             name="nli",
+             features={
+                 "guid": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+                 "premise": datasets.Value("string"),
+                 "hypothesis": datasets.Value("string"),
+                 "label": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
+             },
+             description=textwrap.dedent(
+                 """\
+                 NLI is a task to infer the relationship between a hypothesis sentence and a premise
+                 sentence. Given the premise, the model determines if the hypothesis is true (entailment),
+                 false (contradiction), or undetermined (neutral).
+                 """
+             ),
+             data_url=_DATA_URLs["nli"],
+             url=_DESCRIPTION_URLs["nli"],
+             file_map={
+                 "train": "klue-nli-v1_train.json",
+                 "dev": "klue-nli-v1_dev.json",
+             },
+         ),
+         KlueConfig(
+             name="ner",
+             features={
+                 "sentence": datasets.Value("string"),
+                 "tokens": datasets.Sequence(datasets.Value("string")),
+                 "ner_tags": datasets.Sequence(
+                     datasets.ClassLabel(
+                         names=[
+                             "B-DT",
+                             "I-DT",
+                             "B-LC",
+                             "I-LC",
+                             "B-OG",
+                             "I-OG",
+                             "B-PS",
+                             "I-PS",
+                             "B-QT",
+                             "I-QT",
+                             "B-TI",
+                             "I-TI",
+                             "O",
+                         ]
+                     )
+                 ),
+             },
+             description=textwrap.dedent(
+                 """\
+                 NER is a task to detect the boundaries of named entities in unstructured text and to
+                 classify their types. A named entity can be of one of predefined entity types such as
+                 person, location, organization, time expressions, quantities and monetary values.
+                 """
+             ),
+             data_url=_DATA_URLs["ner"],
+             url=_DESCRIPTION_URLs["ner"],
+             file_map={
+                 "train": "klue-ner-v1_train.tsv",
+                 "dev": "klue-ner-v1_dev.tsv",
+             },
+         ),
+         KlueConfig(
+             name="re",
+             features={
+                 "guid": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "subject_entity": {
+                     "word": datasets.Value("string"),
+                     "start_idx": datasets.Value("int32"),
+                     "end_idx": datasets.Value("int32"),
+                     "type": datasets.Value("string"),
+                 },
+                 "object_entity": {
+                     "word": datasets.Value("string"),
+                     "start_idx": datasets.Value("int32"),
+                     "end_idx": datasets.Value("int32"),
+                     "type": datasets.Value("string"),
+                 },
+                 "label": datasets.ClassLabel(
+                     names=[
+                         "no_relation",
+                         "org:dissolved",
+                         "org:founded",
+                         "org:place_of_headquarters",
+                         "org:alternate_names",
+                         "org:member_of",
+                         "org:members",
+                         "org:political/religious_affiliation",
+                         "org:product",
+                         "org:founded_by",
+                         "org:top_members/employees",
+                         "org:number_of_employees/members",
+                         "per:date_of_birth",
+                         "per:date_of_death",
+                         "per:place_of_birth",
+                         "per:place_of_death",
+                         "per:place_of_residence",
+                         "per:origin",
+                         "per:employee_of",
+                         "per:schools_attended",
+                         "per:alternate_names",
+                         "per:parents",
+                         "per:children",
+                         "per:siblings",
+                         "per:spouse",
+                         "per:other_family",
+                         "per:colleagues",
+                         "per:product",
+                         "per:religion",
+                         "per:title",
+                     ]
+                 ),
+                 "source": datasets.Value("string"),
+             },
+             description=textwrap.dedent(
+                 """\
+                 RE is a task to identify semantic relations between entity pairs in a text. The relation
+                 is defined between an entity pair consisting of a subject entity and an object entity.
+                 The goal is then to pick an appropriate relationship between these two entities.
+                 """
+             ),
+             data_url=_DATA_URLs["re"],
+             url=_DESCRIPTION_URLs["re"],
+             file_map={
+                 "train": "klue-re-v1_train.json",
+                 "dev": "klue-re-v1_dev.json",
+             },
+         ),
+         KlueConfig(
+             name="dp",
+             features={
+                 "sentence": datasets.Value("string"),
+                 "index": [datasets.Value("int32")],
+                 "word_form": [datasets.Value("string")],
+                 "lemma": [datasets.Value("string")],
+                 "pos": [datasets.Value("string")],
+                 "head": [datasets.Value("int32")],
+                 "deprel": [datasets.Value("string")],
+             },
+             description=textwrap.dedent(
+                 """\
+                 DP is a task that aims at finding relational information among words.
+                 The goal is to predict a graph structure and a dependency label of an input sentence
+                 based on the dependency grammar.
+                 """
+             ),
+             data_url=_DATA_URLs["dp"],
+             url=_DESCRIPTION_URLs["dp"],
+             file_map={
+                 "train": "klue-dp-v1_train.tsv",
+                 "dev": "klue-dp-v1_dev.tsv",
+             },
+         ),
+         KlueConfig(
+             name="mrc",
+             features={
+                 "title": datasets.Value("string"),
+                 "context": datasets.Value("string"),
+                 "news_category": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+                 "guid": datasets.Value("string"),
+                 "is_impossible": datasets.Value("bool"),
+                 "question_type": datasets.Value("int32"),
+                 "question": datasets.Value("string"),
+                 "answers": datasets.features.Sequence(
+                     {
+                         "answer_start": datasets.Value("int32"),
+                         "text": datasets.Value("string"),
+                     },
+                 ),
+             },
+             description=textwrap.dedent(
+                 """\
+                 MRC is a task of evaluating a model that can answer a question about a given text
+                 passage. Specifically, we formulate the task as a span prediction task, where the
+                 answer is a text segment (coined as a span) in the passage.
+                 """
+             ),
+             data_url=_DATA_URLs["mrc"],
+             url=_DESCRIPTION_URLs["mrc"],
+             file_map={
+                 "train": "klue-mrc-v1_train.json",
+                 "dev": "klue-mrc-v1_dev.json",
+             },
+         ),
+         KlueConfig(
+             name="wos",
+             features={
+                 "guid": datasets.Value("string"),
+                 "domains": [datasets.Value("string")],
+                 "dialogue": [
+                     {
+                         "role": datasets.Value("string"),
+                         "text": datasets.Value("string"),
+                         "state": [datasets.Value("string")],
+                     }
+                 ],
+             },
+             description=textwrap.dedent(
+                 """\
+                 DST is a task to predict slot and value pairs (dialogue states) from a task-oriented
+                 dialogue. The potential pairs are predefined by a given task schema and knowledge
+                 base (KB).
+                 """
+             ),
+             data_url=_DATA_URLs["wos"],
+             url=_DESCRIPTION_URLs["wos"],
+             file_map={
+                 "train": "wos-v1_train.json",
+                 "dev": "wos-v1_dev.json",
+             },
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_KLUE_DESCRIPTION,
+             features=datasets.Features(self.config.features),
+             homepage=self.config.url,
+             citation=_KLUE_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         dl_dir = dl_manager.download_and_extract(self.config.data_url)
+         dir_name = self.config.data_url.split("/")[-1].replace(".tar.gz", "")
+         data_dir = os.path.join(dl_dir, dir_name)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "data_file": os.path.join(data_dir, self.config.file_map["train"]),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "data_file": os.path.join(data_dir, self.config.file_map["dev"]),
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, data_file, split):
+         if self.config.name in ["ynat", "sts", "re"]:
+             with open(data_file, encoding="UTF-8") as f:
+                 f = json.load(f)
+                 for id_, row in enumerate(f):
+                     features = {key: row[key] for key in row if key in self.config.features}
+                     yield id_, features
+
+         if self.config.name == "nli":
+             with open(data_file, encoding="UTF-8") as f:
+                 f = json.load(f)
+                 for id_, row in enumerate(f):
+                     # In the train file, "source" is written as "genre"
+                     features = {
+                         "guid": row["guid"],
+                         "source": row["source"] if "source" in row else row["genre"],
+                         "premise": row["premise"],
+                         "hypothesis": row["hypothesis"],
+                         "label": row["gold_label"],
+                     }
+                     yield id_, features
+
+         if self.config.name == "ner":
+             with open(data_file, encoding="UTF-8") as f:
+                 reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
+                 for _ in range(5):  # skip headers
+                     next(reader)
+                 id_ = -1
+                 for row in reader:
+                     if row:
+                         if row[0].startswith("##"):  # "##" marks the start of a new sentence
+                             id_ += 1
+                             tokens, ner_tags = [], []
+                             sentence = row[1]
+                         else:
+                             tokens.append(row[0])
+                             ner_tags.append(row[1])
+                     else:  # an empty line ends the current sentence
+                         assert len(tokens) == len(ner_tags)
+                         yield id_, {"sentence": sentence, "tokens": tokens, "ner_tags": ner_tags}
+
+         if self.config.name == "dp":
+             with open(data_file, encoding="UTF-8") as f:
+                 reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
+                 for _ in range(5):  # skip headers
+                     next(reader)
+                 id_ = -1
+                 for row in reader:
+                     if row:
+                         if row[0].startswith("##"):  # "##" marks the start of a new sentence
+                             id_ += 1
+                             index = []
+                             word_form = []
+                             lemma = []
+                             pos = []
+                             head = []
+                             deprel = []
+                             sentence = row[1]
+                         else:
+                             index.append(row[0])
+                             word_form.append(row[1])
+                             lemma.append(row[2])
+                             pos.append(row[3])
+                             head.append(row[4])
+                             deprel.append(row[5])
+                     else:  # an empty line ends the current sentence
+                         assert len(index) == len(word_form) == len(lemma) == len(pos) == len(head) == len(deprel)
+                         yield id_, {
+                             "sentence": sentence,
+                             "index": index,
+                             "word_form": word_form,
+                             "lemma": lemma,
+                             "pos": pos,
+                             "head": head,
+                             "deprel": deprel,
+                         }
+
+         if self.config.name == "mrc":
+             with open(data_file, encoding="UTF-8") as f:
+                 f = json.load(f)
+                 id_ = -1
+                 for example in f["data"]:
+                     title = example.get("title", "")
+                     news_category = example.get("news_category", "")
+                     source = example["source"]
+                     for paragraph in example["paragraphs"]:
+                         context = paragraph["context"].strip()
+                         for qa in paragraph["qas"]:
+                             guid = qa["guid"]
+                             question_type = qa["question_type"]
+                             is_impossible = qa["is_impossible"]
+                             question = qa["question"].strip()
+
+                             if "plausible_answers" in qa:
+                                 qa["answers"].extend(qa["plausible_answers"])
+                             answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                             answers = [answer["text"].strip() for answer in qa["answers"]]
+                             id_ += 1
+
+                             yield id_, {
+                                 "guid": guid,
+                                 "title": title,
+                                 "context": context,
+                                 "news_category": news_category,
+                                 "source": source,
+                                 "question_type": question_type,
+                                 "is_impossible": is_impossible,
+                                 "question": question,
+                                 "answers": {
+                                     "answer_start": answer_starts,
+                                     "text": answers,
+                                 },
+                             }
+
+         if self.config.name == "wos":
+             with open(data_file, encoding="UTF-8") as f:
+                 f = json.load(f)
+                 for id_, row in enumerate(f):
+                     guid = row["guid"]
+                     domains = row["domains"]
+                     dialogue = row["dialogue"]
+                     for utterance in dialogue:
+                         # system turns carry no state annotation in the raw file
+                         if "state" not in utterance:
+                             utterance["state"] = []
+                     yield id_, {"guid": guid, "domains": domains, "dialogue": dialogue}