parquet-converter committed on

Commit cef2472
1 Parent(s): d1cdea5

Update parquet files
README.md DELETED
@@ -1,236 +0,0 @@
- ---
- license: cc-by-nc-4.0
- ---
- # Dataset Card for TexPrax
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- **Homepage:** https://texprax.de/
- **Repository:** https://github.com/UKPLab/TexPrax
- **Paper:** https://arxiv.org/abs/2208.07846
- **Leaderboard:** n/a
- **Point of Contact:** Ji-Ung Lee (http://www.ukp.tu-darmstadt.de/)
-
- ### Dataset Summary
-
- This dataset contains dialogues collected from German factory workers at the _Center for industrial productivity_ ([CiP](https://www.prozesslernfabrik.de/)). The dialogues mostly concern issues workers encounter during their daily work, such as machines breaking down, missing material, etc. The dialogues are further expert-annotated on the sentence level (problem, cause, solution, other) for sentence classification and on the token level for named entity recognition using a BIO tagging scheme. Note that the dataset was collected in three rounds, each about one year apart. Here, we provide the data split only into train and test, where the test data was collected in the last round (July 2022). Additionally, the data from the first round is split into two subdomains, industry 4.0 (industrie) and machining (zerspanung). The splits were made according to the respective groups of people working at different assembly lines in the factory.
-
- ### Supported Tasks and Leaderboards
-
- This dataset supports the following tasks:
-
- * Sentence classification
- * Named entity recognition (will be updated soon with the new indexing)
- * Dialog generation (so far not evaluated)
-
- ### Languages
-
- German
-
- ## Dataset Structure
-
- ### Data Instances
-
- On the sentence level, each instance consists of the dialog-id, turn-id, sentence-id, the sentence (raw), the label, the domain, and the subsplit.
-
- ```
- {"185";"562";"993";"wie kriege ich die Dichtung raus?";"P";"n/a";"3"}
- ```
-
- On the token level, each instance consists of a unique identifier, a list of tokens containing the whole dialog, the list of labels (BIO-tagged entities), and the subsplit.
- ```
- {"178_0";"['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?', 'in', 'der', 'Schublade', 'gibt', 'es', 'einen', 'Dichtungszieher']";"['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'O', 'O', 'B-PE']";"Batch 3"}
- ```
-
- ### Data Fields
-
- Sentence level:
-
- * dialog-id: unique identifier for the dialog
- * turn-id: unique identifier for the turn
- * sentence-id: unique identifier for the sentence
- * sentence: the respective sentence
- * label: the label (_P_ for Problem, _C_ for Cause, _S_ for Solution, and _O_ for Other)
- * domain: the subdomain the data was collected from; industry, machining, or n/a (for batch 2 and batch 3)
- * subsplit: the respective subsplit of the data (see below)
-
- Token level:
-
- * id: the identifier
- * tokens: a list of tokens (i.e., the tokenized dialogue)
- * entities: the named entity labels in the BIO scheme (_B-X_, _I-X_, or _O_); a span-extraction sketch follows below
- * subsplit: the respective subsplit of the data (see below)
-
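For readers unfamiliar with the BIO scheme, here is a minimal, dependency-free sketch (not part of the original card) of turning such a tag sequence into entity spans; the tokens and tags are taken from the token-level instance above:

```
def bio_to_spans(tokens, tags):
    """Collect (entity_type, text) pairs from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing sentinel flushes the last span
        if tag == "O" or tag.startswith("B-"):
            if etype is not None:  # close the currently open span
                spans.append((etype, " ".join(tokens[start:i])))
                etype = None
            if tag.startswith("B-"):
                start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype is None:
            start, etype = i, tag[2:]  # tolerate an I- tag without a preceding B-
    return spans


tokens = ["Hi", "wie", "kriege", "ich", "die", "Dichtung", "raus", "?",
          "in", "der", "Schublade", "gibt", "es", "einen", "Dichtungszieher"]
tags = ["O", "O", "O", "O", "O", "B-PRE", "O", "O",
        "O", "O", "B-LOC", "O", "O", "O", "B-PE"]
print(bio_to_spans(tokens, tags))
# [('PRE', 'Dichtung'), ('LOC', 'Schublade'), ('PE', 'Dichtungszieher')]
```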
- ### Data Splits
-
- The dataset is split into train and test splits, but contains further subsplits (subsplit column; a filtering sketch follows at the end of this section). Note that the subsplits were collected at different times with some turnover in the workforce. Hence, later data (especially the data from batch 2) contains more turns (due to an increased search for a cause), as more inexperienced workers who had newly joined were employed in the factory.
-
- Train:
- * Batch 1 industrie: data collected in October 2020 from workers in the industry 4.0 assembly line
- * Batch 1 zerspanung: data collected in October 2020 from workers in the machining assembly line
- * Batch 2: data collected between October 2021 and June 2022 from all workers
-
- Test:
- * Batch 3: data collected in July 2022 together with the system usability study run
-
- Sentence level statistics:
-
- | Batch | Dialogues | Turns | Sentences |
- |---|---|---|---|
- | 1 | 81 | 246 | 553 |
- | 2 | 97 | 309 | 432 |
- | 3 | 24 | 36 | 42 |
- | Overall | 202 | 591 | 1,027 |
-
- Token level statistics:
- [Needs to be added]
-
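Because the batches live in the subsplit column rather than in separate splits, recovering one batch means filtering. A minimal sketch, assuming the `load_dataset` call from the Additional Information section; the exact subsplit strings are an assumption here (the instance examples above show "3" for the sentence-level config and "Batch 3" for ner), so inspect them first:

```
from datasets import load_dataset

dataset = load_dataset("UKPLab/TexPrax")  # sentence-level config

# Inspect which subsplit (batch) values actually occur before filtering.
print(sorted(set(dataset["train"]["subsplit"])))

# Hypothetical value; replace with one of the strings printed above.
batch2 = dataset["train"].filter(lambda ex: ex["subsplit"] == "2")
print(len(batch2), "sentences from batch 2")
```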
- ## Dataset Creation
-
- ### Curation Rationale
-
- This dataset provides task-oriented dialogues that solve a very domain-specific problem.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The data was generated by workers at the [CiP](https://www.prozesslernfabrik.de/). The data was collected in three rounds (October 2020, October 2021-June 2022, July 2022). As the dialogues occurred during the workers' daily work, one distinct property of the dataset is that all dialogues are very informal ('ne'), contain abbreviations ('vll'), and use filler words such as 'ah'. For a detailed description, please see the [paper](https://arxiv.org/abs/2208.07846).
-
- #### Who are the source language producers?
-
- German factory workers working at the [CiP](https://www.prozesslernfabrik.de/).
-
- ### Annotations
-
- #### Annotation process
-
- **Token level.** Token level annotation was done by researchers who are responsible for supervising and teaching workers at the CiP. The data was first split into three parts, each annotated by one researcher. Next, each researcher cross-examined the other researchers' annotations. If there were disagreements, all three researchers discussed the final label.
-
- **Sentence level.** Sentence level annotations were collected from the factory workers who also generated the dialogues. For details about the data collection, please see the [TexPrax demo paper](https://arxiv.org/abs/2208.07846).
-
- #### Who are the annotators?
-
- **Token level.** Researchers working at the CiP.
-
- **Sentence level.** The factory workers themselves.
-
- ### Personal and Sensitive Information
-
- This dataset is fully anonymized. All occurrences of names have been manually checked during annotation and replaced with a random token.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- Informal language, especially as used in short messages, is seldom considered in existing NLP datasets. This dataset could serve as an interesting evaluation task for transferring language models to low-resource, but highly specific domains. Moreover, we note that despite all the abbreviations, typos, and local dialects used in the messages, all workers were able to understand the questions as well as the replies. This is a standard that future NLP models should be able to uphold.
-
- ### Discussion of Biases
-
- The dialogues are very much on a professional level. The workers were informed (and gave their consent) in advance that their messages were being recorded and processed, which may have influenced them to hold only professional conversations; hence, all dialogues concern inanimate objects (i.e., machines).
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- You can download the data via:
-
- ```
- from datasets import load_dataset
-
- dataset = load_dataset("UKPLab/TexPrax")  # the default config is sentence classification
- dataset = load_dataset("UKPLab/TexPrax", "ner")  # use the "ner" config for named entity recognition
- ```
- Please find more information about the code and how the data was collected on [GitHub](https://github.com/UKPLab/TexPrax).
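A short usage sketch for decoding the integer labels back into the P/C/S/O tags; this assumes the ClassLabel feature defined in the loading script (TexPrax.py below):

```
from datasets import load_dataset

dataset = load_dataset("UKPLab/TexPrax")  # sentence-level config

# "label" is a ClassLabel, i.e. an integer index into ["P", "C", "S", "O"].
label_feature = dataset["train"].features["label"]
example = dataset["train"][0]
print(example["sentence"], "->", label_feature.int2str(example["label"]))
```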
-
- ### Dataset Curators
-
- Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP.
-
- ### Licensing Information
-
- [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)
-
- ### Citation Information
-
- Please cite this data using:
-
- ```
- @article{stangier2022texprax,
-   title={TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation},
-   author={Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna},
-   journal={arXiv preprint arXiv:2208.07846},
-   year={2022}
- }
- ```
-
- ### Contributions
-
- Thanks to [@Wuhn](https://github.com/Wuhn) for adding this dataset.
-
- ## Tags
-
- annotations_creators:
- - expert-generated
-
- language:
- - de
-
- language_creators:
- - expert-generated
-
- license:
- - cc-by-nc-4.0
-
- multilinguality:
- - monolingual
-
- pretty_name: TexPrax-Conversations
-
- size_categories:
- - n<1K
- - 1K<n<10K
-
- source_datasets:
- - original
-
- tags:
- - dialog
- - expert to expert conversations
- - task-oriented
-
- task_categories:
- - token-classification
- - text-classification
-
- task_ids:
- - named-entity-recognition
- - multi-class-classification
 
TexPrax.py DELETED
@@ -1,235 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """TexPrax: Data collected during the project https://texprax.de/."""
-
-
- import ast
- import csv
- import os
-
- import datasets
-
- _CITATION = """\
- @inproceedings{stangier-etal-2022-texprax,
-     title = "{T}ex{P}rax: A Messaging Application for Ethical, Real-time Data Collection and Annotation",
-     author = {Stangier, Lorenz and
-       Lee, Ji-Ung and
-       Wang, Yuxi and
-       M{\"u}ller, Marvin and
-       Frick, Nicholas and
-       Metternich, Joachim and
-       Gurevych, Iryna},
-     booktitle = "Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: System Demonstrations",
-     month = nov,
-     year = "2022",
-     address = "Taipei, Taiwan",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2022.aacl-demo.2",
-     pages = "9--16",
- }
- """
-
- _DESCRIPTION = """\
- This dataset was collected in the [TexPrax](https://texprax.de/) project and contains named entities annotated by three researchers as well as annotated sentences (problem/P, cause/C, solution/S, and other/O).
- """
-
- _HOMEPAGE = "https://texprax.de/"
-
- _LICENSE = "Creative Commons Attribution-NonCommercial 4.0"
-
-
- # URLs of the raw data hosted on TUdatalib.
- _SENTENCE_URL = "https://tudatalib.ulb.tu-darmstadt.de/bitstream/handle/tudatalib/3534/texprax-sentences.zip?sequence=8&isAllowed=y"
- _ENTITY_URL = "https://tudatalib.ulb.tu-darmstadt.de/bitstream/handle/tudatalib/3534/texprax-ner.zip?sequence=9&isAllowed=y"
-
-
- class TexPraxConfig(datasets.BuilderConfig):
-     """BuilderConfig for TexPrax."""
-
-     def __init__(self, features, data_url, citation, url, label_classes=("False", "True"), **kwargs):
-         super(TexPraxConfig, self).__init__(**kwargs)
-         # Store the arguments instead of silently discarding them.
-         self.features = features
-         self.data_url = data_url
-         self.citation = citation
-         self.url = url
-         self.label_classes = label_classes
-
-
- class TexPraxDataset(datasets.GeneratorBasedBuilder):
-     """German dialogues that occurred between workers in a factory. This dataset contains token level entity annotations as well as sentence level problem, cause, solution annotations."""
-
-     VERSION = datasets.Version("1.1.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="sentence_cl", version=VERSION, description="Sentence level annotations of the TexPrax dataset."),
-         datasets.BuilderConfig(name="ner", version=VERSION, description="BIO-tagged named entities of the TexPrax dataset."),
-     ]
-
-     DEFAULT_CONFIG_NAME = "sentence_cl"
-
-     def _info(self):
-         if self.config.name == "sentence_cl":
-             features = datasets.Features(
-                 {
-                     # Note: the ID consists of <dialog-id>_<turn-id>_<sentence-id>
-                     "id": datasets.Value("string"),
-                     "sentence": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(
-                         names=[
-                             "P",
-                             "C",
-                             "S",
-                             "O",
-                         ]
-                     ),
-                     "subsplit": datasets.Value("string"),
-                 }
-             )
-         else:  # the "ner" configuration
-             features = datasets.Features(
-                 {
-                     # Note: the ID consists of <dialog-id>_<turn-id>
-                     "id": datasets.Value("string"),
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "entities": datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=[
-                                 "B-LOC",
-                                 "I-LOC",
-                                 "B-ED",
-                                 "B-ACT",
-                                 "I-ACT",
-                                 "B-PRE",
-                                 "I-PRE",
-                                 "B-AKT",
-                                 "I-AKT",
-                                 "B-PER",
-                                 "I-PER",
-                                 "B-A",
-                                 "B-G",
-                                 "B-I",
-                                 "I-I",
-                                 "B-OT",
-                                 "I-OT",
-                                 "B-M",
-                                 "I-M",
-                                 "B-P",
-                                 "I-P",
-                                 "B-PR",
-                                 "I-PR",
-                                 "B-PE",
-                                 "I-PE",
-                                 "O",
-                             ]
-                         )
-                     ),
-                     "subsplit": datasets.Value("string"),
-                 }
-             )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,  # defined above, as they differ between the two configurations
-             # supervised_keys=("sentence", "label"),
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         # The configuration selected by the user is in self.config.name.
-         # dl_manager downloads and extracts the URL and returns the path to
-         # the cached, extracted archive.
-         if self.config.name == "sentence_cl":
-             data_dir = dl_manager.download_and_extract(_SENTENCE_URL)
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={
-                         "filepath": os.path.join(data_dir, "sents_train.csv"),
-                         "split": "train",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     gen_kwargs={
-                         "filepath": os.path.join(data_dir, "sents_test.csv"),
-                         "split": "test",
-                     },
-                 ),
-             ]
-         else:
-             data_dir = dl_manager.download_and_extract(_ENTITY_URL)
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={
-                         "filepath": os.path.join(data_dir, "entities_train.csv"),
-                         "split": "train",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     gen_kwargs={
-                         "filepath": os.path.join(data_dir, "entities_test.csv"),
-                         "split": "test",
-                     },
-                 ),
-             ]
-
-     # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
-     def _generate_examples(self, filepath, split):
-         # Yields (key, example) tuples; the key is for legacy reasons (tfds)
-         # and is not important in itself, but must be unique for each example.
-         with open(filepath, encoding="utf-8") as f:
-             creader = csv.reader(f, delimiter=";", quotechar='"')
-             next(creader)  # skip header
-             for key, row in enumerate(creader):
-                 if self.config.name == "sentence_cl":
-                     dialog_id, turn_id, sentence_id, sentence, label, domain, batch = row
-                     idx = f"{dialog_id}_{turn_id}_{sentence_id}"
-                     yield key, {
-                         "id": idx,
-                         "sentence": sentence,
-                         "label": label,
-                         "subsplit": batch,
-                         # "domain": domain,  # not exposed in the features above
-                     }
-                 else:
-                     # Avoid shadowing the `split` parameter when unpacking the row.
-                     idx, tokens, labels, subsplit = row
-                     # The token and label lists are stored as Python-literal strings,
-                     # hence ast.literal_eval (see the sketch below).
-                     yield key, {
-                         "id": idx,
-                         "tokens": [t.strip() for t in ast.literal_eval(tokens)],
-                         "entities": [l.strip() for l in ast.literal_eval(labels)],
-                         "subsplit": subsplit,
-                     }
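Since the token-level CSV stores the token and tag lists as Python-literal strings, the loader parses them with `ast.literal_eval`. A minimal standalone sketch of that step (row values shortened from the data instance example in the README):

```
import ast

# Raw fields from a token-level row: Python lists serialized as strings.
raw_tokens = "['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?']"
raw_tags = "['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O']"

# ast.literal_eval only evaluates literals (no arbitrary code execution),
# which is why it is preferred over eval() here.
tokens = [t.strip() for t in ast.literal_eval(raw_tokens)]
tags = [t.strip() for t in ast.literal_eval(raw_tags)]
print(list(zip(tokens, tags)))
```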
ner/tex_prax-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5d088bf86d25be39f75662e4d249cc940787a418148032f527634cebefe5b39
+ size 5225
ner/tex_prax-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c598b50c3ab04c215dc2f90faeff430cdd2a68495e128813f0f0f2a85a0e038c
+ size 19773
sentence_cl/tex_prax-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:866a08a64f11d305979656309a957ce6907c0e9973d7018e68b6148a583516e4
+ size 4356
sentence_cl/tex_prax-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64ac314b4c6a520de5a35cb0ce241cb22e2278f6b384ff58f33c11418110ee5f
+ size 45508
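With the parquet conversion in place, the files can also be read without the loading script. A minimal sketch, assuming the `UKPLab/TexPrax` repo id from the card and the file paths added in this commit; reading `hf://` paths requires `pandas` plus `huggingface_hub`:

```
import pandas as pd

# Hypothetical direct read of one converted file; adjust the path/revision to
# wherever the parquet files live in the repository.
df = pd.read_parquet("hf://datasets/UKPLab/TexPrax/sentence_cl/tex_prax-train.parquet")
print(df.head())
```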