Commit 106d0ac (0 parents)
Committed by: system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +185 -0
  3. bsd_ja_en.py +164 -0
  4. dataset_infos.json +1 -0
  5. dummy/1.0.0/dummy_data.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
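Each pattern above routes matching files through Git LFS instead of storing them in the repository directly. As a rough illustration of which paths the patterns cover, here is a minimal sketch using Python's `fnmatch`; note this is only an approximation of Git's wildmatch semantics (e.g. `fnmatch` treats `**` like `*` and lets `*` cross `/`), not how Git itself evaluates `.gitattributes`.

```python
from fnmatch import fnmatch

# Patterns copied from the .gitattributes hunk above.
LFS_PATTERNS = [
    "*.7z", "*.arrow", "*.bin", "*.bin.*", "*.bz2", "*.ftz", "*.gz", "*.h5",
    "*.joblib", "*.lfs.*", "*.model", "*.msgpack", "*.onnx", "*.ot",
    "*.parquet", "*.pb", "*.pt", "*.pth", "*.rar", "saved_model/**/*",
    "*.tar.*", "*.tflite", "*.tgz", "*.xz", "*.zip", "*.zstandard",
    "*tfevents*",
]


def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any LFS pattern (approximate)."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)
```

For example, `is_lfs_tracked("dummy/1.0.0/dummy_data.zip")` is true, which is why the dummy-data archive in this commit is stored as an LFS pointer.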
README.md ADDED
@@ -0,0 +1,185 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ - ja
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - translation
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+
+ # Dataset Card for Business Scene Dialogue
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/tsuruoka-lab/BSD)
+ - **Repository:** [GitHub](https://github.com/tsuruoka-lab/BSD)
+ - **Paper:** [Rikters et al., 2019](https://www.aclweb.org/anthology/D19-5204)
+ - **Leaderboard:**
+ - **Point of Contact:** Matīss Rikters
+
+ ### Dataset Summary
+ This is the Business Scene Dialogue (BSD) dataset,
+ a Japanese-English parallel corpus containing written conversations
+ in various business scenarios.
+
+ The dataset was constructed in 3 steps:
+ 1) selecting business scenes,
+ 2) writing monolingual conversation scenarios according to the selected scenes, and
+ 3) translating the scenarios into the other language.
+
+ Half of the monolingual scenarios were written in Japanese
+ and the other half were written in English.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+ English, Japanese.
+
+ ## Dataset Structure
+
+ ### Data Instances
+ Each instance contains a conversation identifier, a sentence number that indicates its
+ position within the conversation, the speaker name in English and Japanese,
+ the sentence text in English and Japanese, the original language, the scene of the
+ scenario (tag), and the title of the scenario (title).
+ ```python
+ {
+     "id": "190315_E004_13",
+     "no": 14,
+     "en_speaker": "Mr. Sam Lee",
+     "ja_speaker": "サム リーさん",
+     "en_sentence": "Would you guys consider a different scheme?",
+     "ja_sentence": "別の事業案も考慮されますか?",
+     "original_language": "en",
+     "tag": "phone call",
+     "title": "Phone: Review spec and scheme"
+ }
+ ```
+
+ ### Data Fields
+ - id: dialogue identifier
+ - no: sentence pair number within a dialogue
+ - en_speaker: speaker name in English
+ - ja_speaker: speaker name in Japanese
+ - en_sentence: sentence in English
+ - ja_sentence: sentence in Japanese
+ - original_language: language in which the monolingual scenario was originally written
+ - tag: scene of the scenario (e.g. "phone call")
+ - title: scenario title
+
+ ### Data Splits
+ There are a total of 24171 sentences / 808 business scenarios:
+ - Train: 20000 sentences / 670 scenarios
+ - Dev: 2051 sentences / 69 scenarios
+ - Test: 2120 sentences / 69 scenarios
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+ This dataset was released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license.
+
+ ### Citation Information
+ ```
+ @inproceedings{rikters-etal-2019-designing,
+     title = "Designing the Business Conversation Corpus",
+     author = "Rikters, Mat{\=\i}ss and
+       Ri, Ryokan and
+       Li, Tong and
+       Nakazawa, Toshiaki",
+     booktitle = "Proceedings of the 6th Workshop on Asian Translation",
+     month = nov,
+     year = "2019",
+     address = "Hong Kong, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D19-5204",
+     doi = "10.18653/v1/D19-5204",
+     pages = "54--61"
+ }
+ ```
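The split statistics in the card's Data Splits section are internally consistent; a quick arithmetic check:

```python
# Split statistics as stated in the "Data Splits" section of the card.
splits = {
    "train": {"sentences": 20000, "scenarios": 670},
    "dev": {"sentences": 2051, "scenarios": 69},
    "test": {"sentences": 2120, "scenarios": 69},
}

total_sentences = sum(s["sentences"] for s in splits.values())
total_scenarios = sum(s["scenarios"] for s in splits.values())

# The card's stated totals: 24171 sentences across 808 business scenarios.
assert total_sentences == 24171
assert total_scenarios == 808
```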
bsd_ja_en.py ADDED
@@ -0,0 +1,164 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Japanese-English Business Scene Dialogue (BSD) dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{rikters-etal-2019-designing,
+     title = "Designing the Business Conversation Corpus",
+     author = "Rikters, Matīss and
+       Ri, Ryokan and
+       Li, Tong and
+       Nakazawa, Toshiaki",
+     booktitle = "Proceedings of the 6th Workshop on Asian Translation",
+     month = nov,
+     year = "2019",
+     address = "Hong Kong, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D19-5204",
+     doi = "10.18653/v1/D19-5204",
+     pages = "54--61"
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is the Business Scene Dialogue (BSD) dataset,
+ a Japanese-English parallel corpus containing written conversations
+ in various business scenarios.
+
+ The dataset was constructed in 3 steps:
+   1) selecting business scenes,
+   2) writing monolingual conversation scenarios according to the selected scenes, and
+   3) translating the scenarios into the other language.
+
+ Half of the monolingual scenarios were written in Japanese
+ and the other half were written in English.
+
+ Fields:
+ - id: dialogue identifier
+ - no: sentence pair number within a dialogue
+ - en_speaker: speaker name in English
+ - ja_speaker: speaker name in Japanese
+ - en_sentence: sentence in English
+ - ja_sentence: sentence in Japanese
+ - original_language: language in which monolingual scenario was written
+ - tag: scenario
+ - title: scenario title
+ """
+
+ _HOMEPAGE = "https://github.com/tsuruoka-lab/BSD"
+
+ _LICENSE = "CC BY-NC-SA 4.0"
+
+ _REPO = "https://raw.githubusercontent.com/tsuruoka-lab/BSD/master/"
+
+ _URLs = {
+     "train": _REPO + "train.json",
+     "dev": _REPO + "dev.json",
+     "test": _REPO + "test.json",
+ }
+
+
+ class BsdJaEn(datasets.GeneratorBasedBuilder):
+     """Japanese-English Business Scene Dialogue (BSD) dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "tag": datasets.Value("string"),
+                 "title": datasets.Value("string"),
+                 "original_language": datasets.Value("string"),
+                 "no": datasets.Value("int32"),
+                 "en_speaker": datasets.Value("string"),
+                 "ja_speaker": datasets.Value("string"),
+                 "en_sentence": datasets.Value("string"),
+                 "ja_sentence": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URLs)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": data_dir["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["dev"], "split": "dev"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)
+
+         for dialogue in data:
+             id_ = dialogue["id"]
+             tag = dialogue["tag"]
+             title = dialogue["title"]
+             original_language = dialogue["original_language"]
+             conversation = dialogue["conversation"]
+
+             for turn in conversation:
+                 sent_no = int(turn["no"])
+                 en_speaker = turn["en_speaker"]
+                 ja_speaker = turn["ja_speaker"]
+                 en_sentence = turn["en_sentence"]
+                 ja_sentence = turn["ja_sentence"]
+
+                 # The dialogue id repeats for every turn, so combine it with
+                 # the sentence number to keep example keys unique per split.
+                 yield "{}_{}".format(id_, sent_no), {
+                     "id": id_,
+                     "tag": tag,
+                     "title": title,
+                     "original_language": original_language,
+                     "no": sent_no,
+                     "en_speaker": en_speaker,
+                     "ja_speaker": ja_speaker,
+                     "en_sentence": en_sentence,
+                     "ja_sentence": ja_sentence,
+                 }
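The core of `_generate_examples` is flattening each dialogue's `conversation` list into one example per sentence pair, repeating the dialogue-level metadata. A standalone sketch of that logic, using an inline sample in the shape the script expects (the sample values echo the dataset card's example; no `datasets` dependency):

```python
import json

# Inline sample mimicking the BSD JSON layout: a list of dialogues, each
# carrying metadata plus a "conversation" list of turns.
sample = json.loads("""
[{"id": "190315_E004_13", "tag": "phone call",
  "title": "Phone: Review spec and scheme", "original_language": "en",
  "conversation": [
    {"no": "14", "en_speaker": "Mr. Sam Lee", "ja_speaker": "サム リーさん",
     "en_sentence": "Would you guys consider a different scheme?",
     "ja_sentence": "別の事業案も考慮されますか?"}
  ]}]
""")


def flatten_dialogues(data):
    """Yield one flat example per turn, as the loading script does."""
    for dialogue in data:
        for turn in dialogue["conversation"]:
            yield {
                "id": dialogue["id"],
                "tag": dialogue["tag"],
                "title": dialogue["title"],
                "original_language": dialogue["original_language"],
                "no": int(turn["no"]),  # stored as a string in the raw JSON
                "en_speaker": turn["en_speaker"],
                "ja_speaker": turn["ja_speaker"],
                "en_sentence": turn["en_sentence"],
                "ja_sentence": turn["ja_sentence"],
            }


examples = list(flatten_dialogues(sample))
```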
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This is the Business Scene Dialogue (BSD) dataset,\na Japanese-English parallel corpus containing written conversations\nin various business scenarios.\n\nThe dataset was constructed in 3 steps:\n 1) selecting business scenes,\n 2) writing monolingual conversation scenarios according to the selected scenes, and\n 3) translating the scenarios into the other language.\n\nHalf of the monolingual scenarios were written in Japanese\nand the other half were written in English.\n\nFields:\n- id: dialogue identifier\n- no: sentence pair number within a dialogue\n- en_speaker: speaker name in English\n- ja_speaker: speaker name in Japanese\n- en_sentence: sentence in English\n- ja_sentence: sentence in Japanese\n- original_language: language in which monolingual scenario was written\n- tag: scenario\n- title: scenario title\n", "citation": "@inproceedings{rikters-etal-2019-designing,\n title = \"Designing the Business Conversation Corpus\",\n author = \"Rikters, Mat\u012bss and\n Ri, Ryokan and\n Li, Tong and\n Nakazawa, Toshiaki\",\n booktitle = \"Proceedings of the 6th Workshop on Asian Translation\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D19-5204\",\n doi = \"10.18653/v1/D19-5204\",\n pages = \"54--61\"\n}\n", "homepage": "https://github.com/tsuruoka-lab/BSD", "license": "CC BY-NC-SA 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tag": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "original_language": {"dtype": "string", "id": null, "_type": "Value"}, "no": {"dtype": "int32", "id": null, "_type": "Value"}, "en_speaker": {"dtype": "string", "id": null, "_type": "Value"}, "ja_speaker": {"dtype": "string", "id": null, "_type": "Value"}, "en_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "ja_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "bsd_ja_en", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4778409, "num_examples": 20000, "dataset_name": "bsd_ja_en"}, "test": {"name": "test", "num_bytes": 493038, "num_examples": 2120, "dataset_name": "bsd_ja_en"}, "validation": {"name": "validation", "num_bytes": 477964, "num_examples": 2051, "dataset_name": "bsd_ja_en"}}, "download_checksums": {"https://raw.githubusercontent.com/tsuruoka-lab/BSD/master/train.json": {"num_bytes": 6740756, "checksum": "e011d1b02ed1acdbed5b677b54a3ceae26baaf6cd2a4fea64e4af4ff699c5ffb"}, "https://raw.githubusercontent.com/tsuruoka-lab/BSD/master/dev.json": {"num_bytes": 687409, "checksum": "43c11e7d9bdb4ab8ecc83114c82a77c596f98c8845af92fb74c8ca12cc9cfa5c"}, "https://raw.githubusercontent.com/tsuruoka-lab/BSD/master/test.json": {"num_bytes": 706880, "checksum": "9da3f8907147b2424671058c93d9f41179ddbd8d0c8298a3e8546c2703174d31"}}, "download_size": 8135045, "post_processing_size": null, "dataset_size": 5749411, "size_in_bytes": 13884456}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03d1da015c68e5c0f0e9bee9939ef361ab9333f036843eeecaa64c56765216a5
+ size 25221
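What the diff stores for `dummy_data.zip` is not the archive itself but a Git LFS pointer file: three `key value` lines naming the spec version, the object's SHA-256 digest, and its size in bytes. A small sketch parsing such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its fields.

    Each line is "key value"; "oid" carries the hash algorithm as a
    prefix ("sha256:<digest>") and "size" is the byte count of the
    real object stored in LFS.
    """
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }


# The pointer from the dummy/1.0.0/dummy_data.zip hunk above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:03d1da015c68e5c0f0e9bee9939ef361ab9333f036843eeecaa64c56765216a5
size 25221
"""
info = parse_lfs_pointer(pointer)
```

So the dummy archive itself is 25221 bytes, fetched from LFS storage on checkout.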