system (HF staff) committed on
Commit 50822a7
0 Parent(s):

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +182 -0
  3. ar_sarcasm.py +110 -0
  4. dataset_infos.json +1 -0
  5. dummy/1.0.0/dummy_data.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
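These patterns route large binary artifacts through Git LFS so that only lightweight pointer files live in the git history (the `dummy/1.0.0/dummy_data.zip` entry below is one such pointer). As a rough illustration only, the sketch below approximates the pattern matching with Python's `fnmatch`; real `.gitattributes` matching follows gitignore-style rules, so directory patterns such as `saved_model/**/*` are only loosely approximated here.

```python
from fnmatch import fnmatch

# LFS patterns copied from the .gitattributes above.
# fnmatch only approximates gitattributes/gitignore semantics.
LFS_PATTERNS = [
    "*.7z", "*.arrow", "*.bin", "*.bin.*", "*.bz2", "*.ftz", "*.gz", "*.h5",
    "*.joblib", "*.lfs.*", "*.model", "*.msgpack", "*.onnx", "*.ot", "*.parquet",
    "*.pb", "*.pt", "*.pth", "*.rar", "saved_model/**/*", "*.tar.*", "*.tflite",
    "*.tgz", "*.xz", "*.zip", "*.zstandard", "*tfevents*",
]

def likely_lfs_tracked(path: str) -> bool:
    """Roughly check whether a path would match one of the LFS patterns."""
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pat) or fnmatch(path, pat) for pat in LFS_PATTERNS)

print(likely_lfs_tracked("dummy/1.0.0/dummy_data.zip"))  # True  (*.zip)
print(likely_lfs_tracked("ar_sarcasm.py"))               # False (plain text file)
```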
README.md ADDED
@@ -0,0 +1,182 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - ar
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-semeval_2017
+ - extended|other-astd
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ - text-classification-other-sarcasm-detection
+ ---
+
+ # Dataset Card for ArSarcasm
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Repository:** [GitHub](https://github.com/iabufarha/ArSarcasm)
+ - **Paper:** https://www.aclweb.org/anthology/2020.osact-1.5/
+
+ ### Dataset Summary
+
+ ArSarcasm is a new Arabic sarcasm detection dataset.
+ The dataset was created using previously available Arabic sentiment analysis
+ datasets ([SemEval 2017](https://www.aclweb.org/anthology/S17-2088.pdf)
+ and [ASTD](https://www.aclweb.org/anthology/D15-1299.pdf)) and adds sarcasm and
+ dialect labels to them.
+
+ The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.
+
+ For more details, please check the paper
+ [From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset](https://www.aclweb.org/anthology/2020.osact-1.5/).
+
+ ### Supported Tasks and Leaderboards
+
+ You can find more information about the Arabic sarcasm detection task and its leaderboard
+ [here](https://sites.google.com/view/ar-sarcasm-sentiment-detection/).
+
+ ### Languages
+
+ Arabic (multiple dialects)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```python
+ {'dialect': 1, 'original_sentiment': 0, 'sarcasm': 0, 'sentiment': 0, 'source': 'semeval', 'tweet': 'نصيحه ما عمرك اتنزل لعبة سوبر ماريو مش زي ما كنّا متوقعين الله يرحم ايامات السيقا والفاميلي #SuperMarioRun'}
+ ```
+
+ ### Data Fields
+
+ - tweet: the original tweet text
+ - sarcasm: 0 for non-sarcastic, 1 for sarcastic
+ - sentiment: the newly annotated sentiment label; 0 for negative, 1 for neutral, 2 for positive
+ - original_sentiment: the sentiment label from the source dataset (SemEval 2017 or ASTD); 0 for negative, 1 for neutral, 2 for positive
+ - source: the original source of the tweet: SemEval or ASTD
+ - dialect: 0 for Egypt, 1 for Gulf, 2 for Levant, 3 for Maghreb, 4 for Modern Standard Arabic (MSA)
+
+ ### Data Splits
+
+ The training set contains 8,437 tweets, while the test set contains 2,110 tweets.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them.
+
+ #### Who are the source language producers?
+
+ SemEval 2017 and ASTD
+
+ ### Annotations
+
+ #### Annotation process
+
+ We used the Figure-Eight crowdsourcing platform for the annotation process. Our main objective was to annotate the data for sarcasm detection,
+ but because of the challenges posed by dialectal variation, we decided to add dialect annotations as well. We also included a new sentiment annotation
+ in order to get a glimpse of the variability and subjectivity among different annotators. Thus, the annotators were asked to provide three labels
+ for each tweet, as follows:
+
+ - Sarcasm: sarcastic or non-sarcastic.
+ - Sentiment: positive, negative or neutral.
+ - Dialect: Egyptian, Gulf, Levantine, Maghrebi or Modern Standard Arabic (MSA).
+
+ #### Who are the annotators?
+
+ Annotators recruited through the Figure-Eight crowdsourcing platform.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ - Ibrahim Abu-Farha
+ - Walid Magdy
+
+ ### Licensing Information
+
+ MIT
+
+ ### Citation Information
+
+ ```
+ @inproceedings{abu-farha-magdy-2020-arabic,
+     title = "From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset",
+     author = "Abu Farha, Ibrahim and Magdy, Walid",
+     booktitle = "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
+     month = may,
+     year = "2020",
+     address = "Marseille, France",
+     publisher = "European Language Resource Association",
+     url = "https://www.aclweb.org/anthology/2020.osact-1.5",
+     pages = "32--39",
+     language = "English",
+     ISBN = "979-10-95546-51-1",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
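As a usage sketch for readers of the card above, the snippet below loads the dataset with the `datasets` library, assuming it is available under the name `ar_sarcasm` defined by the script added below; the expected split sizes and label names come from the card and from `dataset_infos.json`.

```python
from datasets import load_dataset

# Load the train/test splits defined by the ar_sarcasm.py script in this repository.
ds = load_dataset("ar_sarcasm")
print(ds)  # expected splits: train (8,437 examples) and test (2,110 examples)

# Class labels are stored as integers; the ClassLabel features map them back to names.
example = ds["train"][0]
features = ds["train"].features
print(example["tweet"])
print(features["sarcasm"].int2str(example["sarcasm"]))  # "sarcastic" or "non-sarcastic"
print(features["dialect"].int2str(example["dialect"]))  # e.g. "egypt", "gulf", "msa"
```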
ar_sarcasm.py ADDED
@@ -0,0 +1,110 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ # BibTeX citation for the ArSarcasm paper
+ _CITATION = """@inproceedings{abu-farha-magdy-2020-arabic,
+     title = "From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset",
+     author = "Abu Farha, Ibrahim and Magdy, Walid",
+     booktitle = "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
+     month = may,
+     year = "2020",
+     address = "Marseille, France",
+     publisher = "European Language Resource Association",
+     url = "https://www.aclweb.org/anthology/2020.osact-1.5",
+     pages = "32--39",
+     language = "English",
+     ISBN = "979-10-95546-51-1",
+ }"""
+
+ _DESCRIPTION = """\
+ ArSarcasm is a new Arabic sarcasm detection dataset.
+ The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD)
+ and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.
+ """
+
+ _LICENSE = "MIT"
+
+ _URLs = {
+     "default": "https://github.com/iabufarha/ArSarcasm/archive/master.zip",
+ }
+
+
+ class ArSarcasm(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "dialect": datasets.ClassLabel(names=["egypt", "gulf", "levant", "magreb", "msa"]),
+                 "sarcasm": datasets.ClassLabel(names=["non-sarcastic", "sarcastic"]),
+                 "sentiment": datasets.ClassLabel(names=["negative", "neutral", "positive"]),
+                 "original_sentiment": datasets.ClassLabel(names=["negative", "neutral", "positive"]),
+                 "tweet": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage="https://github.com/iabufarha/ArSarcasm",
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "ArSarcasm-master", "dataset", "ArSarcasm_train.csv"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "ArSarcasm-master", "dataset", "ArSarcasm_test.csv"),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         with open(filepath, encoding="utf-8") as f:
+             rdr = csv.reader(f, delimiter=",")
+             next(rdr)  # skip the CSV header row
+             for id_, row in enumerate(rdr):
+                 if len(row) < 6:
+                     # skip malformed rows that do not contain all six columns
+                     continue
+                 if row[4][0] == '"' and row[4][-1] == '"':
+                     # strip surrounding quotation marks around the tweet text, if present
+                     row[4] = row[4][1:-1]
+                 yield id_, {
+                     "dialect": row[0],
+                     # the CSV stores sarcasm as True/False; map it to the ClassLabel names
+                     "sarcasm": "sarcastic" if row[1] == "True" else "non-sarcastic",
+                     "sentiment": row[2],
+                     "original_sentiment": row[3],
+                     "tweet": row[4],
+                     "source": row[5],
+                 }
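A minimal illustrative sketch of how the `ClassLabel` features declared in `_info` relate to the string values yielded by `_generate_examples`: the generator emits label names such as `"egypt"` or `"sarcastic"`, and the library stores them as the integer ids seen in the README's example instance.

```python
import datasets

# Same label definitions as in ArSarcasm._info (copied here for illustration).
dialect = datasets.ClassLabel(names=["egypt", "gulf", "levant", "magreb", "msa"])
sarcasm = datasets.ClassLabel(names=["non-sarcastic", "sarcastic"])

print(dialect.str2int("gulf"))       # 1 -> matches 'dialect': 1 in the example instance
print(sarcasm.str2int("sarcastic"))  # 1
print(dialect.int2str(4))            # "msa"
```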
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "ArSarcasm is a new Arabic sarcasm detection dataset.\nThe dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD)\n and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.\n", "citation": "@inproceedings{abu-farha-magdy-2020-arabic,\n title = \"From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset\",\n author = \"Abu Farha, Ibrahim and Magdy, Walid\",\n booktitle = \"Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection\",\n month = may,\n year = \"2020\",\n address = \"Marseille, France\",\n publisher = \"European Language Resource Association\",\n url = \"https://www.aclweb.org/anthology/2020.osact-1.5\",\n pages = \"32--39\",\n language = \"English\",\n ISBN = \"979-10-95546-51-1\",\n}", "homepage": "https://github.com/iabufarha/ArSarcasm", "license": "MIT", "features": {"dialect": {"num_classes": 5, "names": ["egypt", "gulf", "levant", "magreb", "msa"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sarcasm": {"num_classes": 2, "names": ["non-sarcastic", "sarcastic"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sentiment": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "original_sentiment": {"num_classes": 3, "names": ["negative", "neutral", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "tweet": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ar_sarcasm", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1829167, "num_examples": 8437, "dataset_name": "ar_sarcasm"}, "test": {"name": "test", "num_bytes": 458218, "num_examples": 2110, "dataset_name": "ar_sarcasm"}}, "download_checksums": {"https://github.com/iabufarha/ArSarcasm/archive/master.zip": {"num_bytes": 750717, "checksum": "a148877c4c933827d83b6e679880eeccf58b751c5ed5785f2f5a93aa950d0d41"}}, "download_size": 750717, "post_processing_size": null, "dataset_size": 2287385, "size_in_bytes": 3038102}}
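The metadata above records the split sizes, feature schema, and download checksum. A short sketch, assuming a local copy of this file saved as `dataset_infos.json`, for reading those numbers back:

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

info = infos["default"]
for split_name, split_meta in info["splits"].items():
    print(split_name, split_meta["num_examples"])  # train 8437, test 2110
print("download_size:", info["download_size"], "bytes")  # 750717
```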
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd29c761c21ffb7714b262bf238514ea741aa19b154edb54843735994b4fe983
+ size 1903