Languages: Korean
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: original
ArXiv: 1912.00342
License: cc-by-sa-4.0
system HF staff committed on
Commit fd8b0e4
0 Parent(s):

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +158 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. kor_sae.py +91 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,158 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - ko
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - intent-classification
+ ---
+
+ # Dataset Card for Structured Argument Extraction for Korean
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k)
+ - **Repository:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k)
+ - **Paper:** [Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives](https://arxiv.org/abs/1912.00342)
+ - **Point of Contact:** [Won Ik Cho](wicho@hi.snu.ac.kr)
+
+ ### Dataset Summary
+
+ The Structured Argument Extraction for Korean dataset is a set of question-argument and command-argument pairs with their respective question type and negativeness labels. Agents like Alexa or Siri often encounter conversations without a clear objective from the user. The goal of this dataset is to extract the intent argument of a given utterance pair without a clear directive. This may yield a more robust agent capable of parsing non-canonical forms of speech.
+
+ ### Supported Tasks and Leaderboards
+
+ * `intent_classification`: The dataset can be used to train a Transformer model such as [BERT](https://huggingface.co/bert-base-uncased) to classify the intent argument of a question/command pair in Korean, and its performance can be measured by its BERTScore.
+
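+ A minimal sketch of loading the corpus and preparing it for such a classifier (the multilingual BERT checkpoint and the pair-encoding scheme below are illustrative assumptions, not part of the released card):
+
+ ```python
+ # Load kor_sae from the Hugging Face Hub and tokenize the utterance pairs
+ # for a BERT-style sequence-pair classifier.
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ dataset = load_dataset("kor_sae", split="train")
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")  # assumed checkpoint
+
+ def tokenize(batch):
+     # Encode the original utterance and its structured paraphrase as one sequence pair.
+     return tokenizer(batch["intent_pair1"], batch["intent_pair2"], truncation=True)
+
+ encoded = dataset.map(tokenize, batched=True)
+ print(encoded[0]["label"])  # integer class id, e.g. 4 ("requirements")
+ ```
+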
+ ### Languages
+
+ The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example data instance contains a question or command pair and its label:
+
+ ```
+ {
+   "intent_pair1": "내일 오후 다섯시 조별과제 일정 추가해줘",
+   "intent_pair2": "내일 오후 다섯시 조별과제 일정 추가하기",
+   "label": 4
+ }
+ ```
+
+ ### Data Fields
+
+ * `intent_pair1`: a question/command pair
+ * `intent_pair2`: a corresponding question/command pair
+ * `label`: determines the intent argument of the pair and can be one of `yes/no` (0), `alternative` (1), `wh- questions` (2), `prohibitions` (3), `requirements` (4) and `strong requirements` (5)
+
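+ The integer ids map to these names through the dataset's `ClassLabel` feature; a small sketch of decoding them (variable names are illustrative):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("kor_sae", split="train")
+ label_feature = dataset.features["label"]
+
+ # Convert between integer ids and label names.
+ print(label_feature.int2str(4))               # "requirements"
+ print(label_feature.str2int("prohibitions"))  # 3
+ ```
+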
+ ### Data Splits
+
+ The corpus contains 30,837 examples in a single train split.
+
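+ Since no validation or test split is shipped, users may carve one out themselves; a hedged sketch (the 10% ratio and seed are arbitrary choices):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("kor_sae", split="train")
+ # Hold out 10% of the 30,837 examples as an evaluation set.
+ splits = dataset.train_test_split(test_size=0.1, seed=42)
+ print(splits["train"].num_rows, splits["test"].num_rows)
+ ```
+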
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The Structured Argument Extraction for Korean dataset was curated to help train models that extract intent arguments from utterances without a clear objective or utterances that use non-canonical forms of speech. This is especially helpful for Korean because, unlike English, where the `who, what, where, when and why` usually come at the beginning of a sentence, their position is not fixed in Korean. For low-resource languages, this lack of data can be a bottleneck for comprehension performance.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The corpus was taken from the one constructed by [Cho et al.](https://arxiv.org/abs/1811.04231), a Korean single utterance corpus for identifying directives/non-directives that contains a wide variety of non-canonical directives.
+
+ #### Who are the source language producers?
+
+ Korean speakers are the source language producers.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Utterances were categorized as question or command arguments and then further classified according to their intent argument.
+
+ #### Who are the annotators?
+
+ The annotation was done by three Korean natives with a background in computational linguistics.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset is curated by Won Ik Cho, Young Ki Moon, Sangwhan Moon, Seok Min Kim and Nam Soo Kim.
+
+ ### Licensing Information
+
+ The dataset is licensed under CC BY-SA 4.0.
+
+ ### Citation Information
+
+ ```
+ @article{cho2019machines,
+   title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives},
+   author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo},
+   journal={arXiv preprint arXiv:1912.00342},
+   year={2019}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This new dataset is designed to extract intent from non-canonical directives which will help dialog managers\nextract intent from user dialog that may have no clear objective or are paraphrased forms of utterances.\n", "citation": "@article{cho2019machines,\n title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives},\n author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo},\n journal={arXiv preprint arXiv:1912.00342},\n year={2019}\n}\n", "homepage": "https://github.com/warnikchow/sae4k", "license": "CC-BY-SA-4.0", "features": {"intent_pair1": {"dtype": "string", "id": null, "_type": "Value"}, "intent_pair2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 6, "names": ["yes/no", "alternative", "wh- questions", "prohibitions", "requirements", "strong requirements"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kor_sae", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2885167, "num_examples": 30837, "dataset_name": "kor_sae"}}, "download_checksums": {"https://raw.githubusercontent.com/warnikchow/sae4k/master/data/sae4k_v1.txt": {"num_bytes": 2545926, "checksum": "529361e1aa760ca90db71fc70a93215f45938028735aba1291a907f764fe1f36"}}, "download_size": 2545926, "post_processing_size": null, "dataset_size": 2885167, "size_in_bytes": 5431093}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:201218de592216b69d0ec5b294f7039d9c5c24d51d296a09ecd4186b5f1a9d9b
+ size 478
kor_sae.py ADDED
@@ -0,0 +1,91 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Structured Argument Extraction for Korean"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{cho2019machines,
+   title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives},
+   author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo},
+   journal={arXiv preprint arXiv:1912.00342},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This new dataset is designed to extract intent from non-canonical directives which will help dialog managers
+ extract intent from user dialog that may have no clear objective or are paraphrased forms of utterances.
+ """
+
+ _HOMEPAGE = "https://github.com/warnikchow/sae4k"
+
+ _LICENSE = "CC-BY-SA-4.0"
+
+ _TRAIN_DOWNLOAD_URL = "https://raw.githubusercontent.com/warnikchow/sae4k/master/data/sae4k_v1.txt"
+
+
+ class KorSae(datasets.GeneratorBasedBuilder):
+     """Structured Argument Extraction for Korean"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "intent_pair1": datasets.Value("string"),
+                     "intent_pair2": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(
+                         names=[
+                             "yes/no",
+                             "alternative",
+                             "wh- questions",
+                             "prohibitions",
+                             "requirements",
+                             "strong requirements",
+                         ]
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Generate KorSAE examples"""
+
+         with open(filepath, encoding="utf-8") as csv_file:
+             data = csv.reader(csv_file, delimiter="\t")
+             for id_, row in enumerate(data):
+                 intent_pair1, intent_pair2, label = row
+                 yield id_, {"intent_pair1": intent_pair1, "intent_pair2": intent_pair2, "label": int(label)}