system (HF staff) committed
Commit 0d23221
0 parent(s)

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,205 @@
+ ---
+ annotations_creators:
+ - other
+ language_creators:
+ - found
+ languages:
+ - hi
+ licenses:
+ - other-MIDAS-LAB-IIITD-Delhi
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - sequence-modeling-other-discourse-analysis
+ ---
+
+ # Dataset Card for Discourse Analysis dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/midas-research/hindi-discourse
+ - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.149/
+ - **Point of Contact:** https://github.com/midas-research/MeTooMA
+
+ ### Dataset Summary
+
+ - The Hindi Discourse Analysis dataset is a corpus for analyzing the discourse modes present in its sentences.
+ - It contains sentences from stories written by 11 famous authors of the 20th century.
+ - Four to five stories per author that were available in the public domain were selected, resulting in a collection of 53 stories.
+ - Most of these short stories were originally written in Hindi, but some were written in other Indian languages and later translated to Hindi.
+ - The corpus contains a total of 10472 sentences belonging to the following categories:
+   - Argumentative
+   - Descriptive
+   - Dialogic
+   - Informative
+   - Narrative
+
+ ### Supported Tasks and Leaderboards
+
+ - Discourse analysis of Hindi.
+
+ ### Languages
+
+ Hindi
+
+ ## Dataset Structure
+ - The dataset is stored in JSON format; a minimal loading sketch is shown after the data splits below.
+
+ ### Data Instances
+ {'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'}
+
+ ### Data Fields
+
+ Sentence number (used as the example index), story number (`Story_no`), sentence (`Sentence`) and discourse mode (`Discourse Mode`).
+
+ ### Data Splits
+
+ - Train: 9983
+
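+ The snippet below is a minimal loading sketch, not part of the original card. It assumes the dataset is accessible through the `datasets` library under the `hindi_discourse` identifier (backed by the `hindi_discourse.py` script in this repository); adjust the name or path if you load it differently.
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed identifier; alternatively pass the path to the local hindi_discourse.py script.
+ dataset = load_dataset("hindi_discourse")
+
+ print(dataset)              # available splits and number of examples
+ print(dataset["train"][0])  # one record: Story_no, Sentence, Discourse Mode
+
+ # "Discourse Mode" is a ClassLabel, so examples store integer ids;
+ # the label names can be recovered from the feature definition.
+ print(dataset["train"].features["Discourse Mode"].names)
+ ```
+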
+ ## Dataset Creation
+
+ ### Curation Rationale
+ - Present a new publicly available corpus of sentences from short stories written in Hindi, a low-resource language,
+ with high-quality annotations for five different discourse modes:
+ argumentative, narrative, descriptive, dialogic and informative.
+
+ - Perform a detailed analysis of the proposed annotated corpus and characterize the performance of
+ different classification algorithms.
+
+ ### Source Data
+ - The source of all data points in this dataset is Hindi stories written by famous authors of Hindi literature.
+
+ #### Initial Data Collection and Normalization
+
+ - All the data was collected from various Hindi websites.
+ - We chose against crowd-sourcing the annotation process because we wanted to work directly with the annotators for qualitative feedback and to ensure high-quality annotations.
+ - We employed three native Hindi speakers with college-level education for the annotation task.
+ - We first selected two random stories from our corpus and had the three annotators work on them independently, classifying each sentence based on its discourse mode.
+ - Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
+
+ #### Who are the source language producers?
+
+ Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
+
+ ### Annotations
+
+ #### Annotation process
+
+ - The authors chose against crowd-sourcing for labeling this dataset due to its highly sensitive nature.
+ - The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
+ - They were provided a guidelines document with instructions about each task and its definitions, labels and examples.
+ - They studied the document and worked through a few examples to get used to the annotation task.
+ - They also provided feedback for improving the class definitions.
+ - The annotation process is not mutually exclusive: the presence of one label does not imply the absence of another.
+
+ #### Who are the annotators?
+
+ - The annotators were three native Hindi speakers with college-level education.
+ - Please refer to the accompanying paper for a detailed description of the annotation process.
+
+ ### Personal and Sensitive Information
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+ - As future work, the authors would like to use the presented corpus in downstream tasks such as emotion analysis, machine translation,
+ textual entailment, and speech synthesis, to improve the storytelling experience in Hindi.
+
+ ### Discussion of Biases
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ - The deep learning models trained on this data could not achieve the best performance, due to
+ insufficient data for DL models.
+
+ ## Additional Information
+
+ Please refer to this link: https://github.com/midas-research/hindi-discourse
+
+ ### Dataset Curators
+
+ - If you use the corpus in a product or application, then please credit the authors and the [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
+ Also, if you send us an email, we will be thrilled to know how you have used the corpus.
+ - If interested in commercial use of the corpus, send an email to midas@iiitd.ac.in.
+ - The Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
+ disclaims any responsibility for the use of the corpus and does not provide technical support.
+ However, the contact listed above will be happy to respond to queries and clarifications.
+ - Please feel free to send us an email:
+   - with feedback regarding the corpus.
+   - with information on how you have used the corpus.
+   - if interested in having us analyze your social media data.
+   - if interested in a collaborative research project.
+
+ ### Licensing Information
+
+ - If you use the corpus in a product or application, then please credit the authors and the [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
+
+ ### Citation Information
+
+ Please cite the following publication if you make use of the dataset: https://www.aclweb.org/anthology/2020.lrec-1.149/
+
+ ```
+ @inproceedings{dhanwal-etal-2020-annotated,
+     title = "An Annotated Dataset of Discourse Modes in {H}indi Stories",
+     author = "Dhanwal, Swapnil and
+       Dutta, Hritwik and
+       Nankani, Hitesh and
+       Shrivastava, Nilay and
+       Kumar, Yaman and
+       Li, Junyi Jessy and
+       Mahata, Debanjan and
+       Gosangi, Rakesh and
+       Zhang, Haimin and
+       Shah, Rajiv Ratn and
+       Stent, Amanda",
+     booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+     month = may,
+     year = "2020",
+     address = "Marseille, France",
+     publisher = "European Language Resources Association",
+     url = "https://www.aclweb.org/anthology/2020.lrec-1.149",
+     pages = "1191--1196",
+     abstract = "In this paper, we present a new corpus consisting of sentences from Hindi short stories annotated for five different discourse modes argumentative, narrative, descriptive, dialogic and informative. We present a detailed account of the entire data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.87 k-alpha). We analyze the data in terms of label distributions, part of speech tags, and sentence lengths. We characterize the performance of various classification algorithms on this dataset and perform ablation studies to understand the nature of the linguistic models suitable for capturing the nuances of the embedded discourse structures in the presented corpus.",
+     language = "English",
+     ISBN = "979-10-95546-34-4",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences. \nIt contains sentences from stories written by 11 famous authors from the 20th Century. \n4-5 stories by each author have been selected which were available in the public domain resulting \nin a collection of 53 stories. Most of these short stories were originally written in Hindi \nbut some of them were written in other Indian languages and later translated to Hindi.\n", "citation": "@inproceedings{swapnil2020,\n title={An Annotated Dataset of Discourse Modes in Hindi Stories},\n author={Swapnil Dhanwal, Hritwik Dutta, Hitesh Nankani, Nilay Shrivastava, Yaman Kumar, Junyi Jessy Li, Debanjan Mahata, Rakesh Gosangi, Haimin Zhang, Rajiv Ratn Shah, Amanda Stent},\n booktitle={Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},\n volume={12},\n pages={1191\u20131196},\n year={2020}\n", "homepage": "https://github.com/midas-research/hindi-discourse", "license": "", "features": {"Story_no": {"dtype": "int32", "id": null, "_type": "Value"}, "Sentence": {"dtype": "string", "id": null, "_type": "Value"}, "Discourse Mode": {"num_classes": 6, "names": ["Argumentative", "Descriptive", "Dialogue", "Informative", "Narrative", "Other"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hindi_discourse", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1998930, "num_examples": 9968, "dataset_name": "hindi_discourse"}}, "download_checksums": {"https://raw.githubusercontent.com/midas-research/hindi-discourse/master/discourse_dataset.json": {"num_bytes": 4176677, "checksum": "d27b447e383686213f9936467ec9fbc9e44fa0aebd3f8000865f605a5b3d4ab0"}}, "download_size": 4176677, "post_processing_size": null, "dataset_size": 1998930, "size_in_bytes": 6175607}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfa40355ab25ef76ad011ca77faebed77dd21574c619c139636e627c4bdf8cc2
+ size 2386
hindi_discourse.py ADDED
@@ -0,0 +1,91 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Hindi Discourse Analysis dataset: discourse modes in sentences from Hindi short stories."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{swapnil2020,
+     title={An Annotated Dataset of Discourse Modes in Hindi Stories},
+     author={Swapnil Dhanwal, Hritwik Dutta, Hitesh Nankani, Nilay Shrivastava, Yaman Kumar, Junyi Jessy Li, Debanjan Mahata, Rakesh Gosangi, Haimin Zhang, Rajiv Ratn Shah, Amanda Stent},
+     booktitle={Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+     volume={12},
+     pages={1191–1196},
+     year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences.
+ It contains sentences from stories written by 11 famous authors from the 20th Century.
+ 4-5 stories by each author have been selected which were available in the public domain resulting
+ in a collection of 53 stories. Most of these short stories were originally written in Hindi
+ but some of them were written in other Indian languages and later translated to Hindi.
+ """
+
+
+ _DOWNLOAD_URL = "https://raw.githubusercontent.com/midas-research/hindi-discourse/master/discourse_dataset.json"
+
+
+ class HindiDiscourse(datasets.GeneratorBasedBuilder):
+     """Hindi Discourse Dataset - dataset of discourse modes in Hindi stories."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         # This method specifies the datasets.DatasetInfo object, which contains information and typings for the dataset
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "Story_no": datasets.Value("int32"),
+                     "Sentence": datasets.Value("string"),
+                     "Discourse Mode": datasets.ClassLabel(
+                         names=["Argumentative", "Descriptive", "Dialogue", "Informative", "Narrative", "Other"]
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage="https://github.com/midas-research/hindi-discourse",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         dataset_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": dataset_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             hindiDiscourse = json.load(f)
+
+         # The JSON file maps a sentence index to its row; the index is reused as the example key.
+         for sentence, rowData in hindiDiscourse.items():
+             yield sentence, {
+                 "Story_no": rowData["Story_no"],
+                 "Sentence": rowData["Sentence"],
+                 "Discourse Mode": rowData["Discourse Mode"],
+             }
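As a quick sanity check of the loading script above, the sketch below loads the dataset from the local file and counts how many sentences carry each discourse mode. It is illustrative only: it assumes the `datasets` library is installed and that `hindi_discourse.py` sits in the current directory.

```python
from collections import Counter

from datasets import load_dataset

# Load the train split through the local loading script (path is an assumption).
dataset = load_dataset("./hindi_discourse.py", split="train")

# "Discourse Mode" is a ClassLabel storing integer ids; int2str() maps them back to names.
mode_feature = dataset.features["Discourse Mode"]
counts = Counter(mode_feature.int2str(example["Discourse Mode"]) for example in dataset)

for label, count in counts.most_common():
    print(f"{label}: {count}")
```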