system HF staff committed on
Commit
a150770
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,164 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - de
+ licenses:
+ - cc-by-sa-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ - text-scoring
+ task_ids:
+ - sentiment-scoring
+ - structure-prediction-other-pos-tagging
+ ---
+
+ # Dataset Card for SentiWS
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://wortschatz.uni-leipzig.de/en/download
+ - **Repository:** [Needs More Information]
+ - **Paper:** http://www.lrec-conf.org/proceedings/lrec2010/pdf/490_Paper.pdf
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining, etc. It lists positive and negative polarity-bearing words weighted within the interval [-1, 1], plus their part-of-speech tag and, if applicable, their inflections. The current version of SentiWS contains around 1,650 positive and 1,800 negative words, which sum up to around 16,000 positive and 18,000 negative word forms including their inflections. It contains not only adjectives and adverbs that explicitly express a sentiment, but also nouns and verbs that implicitly carry one.
+
+ ### Supported Tasks and Leaderboards
+
+ Sentiment scoring, POS tagging
+
+ ### Languages
+
+ German
+
+ ## Dataset Structure
+
+ ### Data Instances
+ For pos-tagging:
+ ```
+ {
+   "word": "Abbau",
+   "pos-tag": 0
+ }
+ ```
+ For sentiment-scoring:
+ ```
+ {
+   "word": "Abbau",
+   "sentiment-score": -0.058
+ }
+ ```
+
+ ### Data Fields
+
+ SentiWS is UTF-8 encoded text.
+ For pos-tagging:
+ - word: one word as a string
+ - pos-tag: the part-of-speech tag of the word as an integer
+ For sentiment-scoring:
+ - word: one word as a string
+ - sentiment-score: the sentiment score of the word as a float between -1 and 1
+
+ The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval [-1, 1].
+
+ ### Data Splits
+
+ train: 1,650 positive and 1,818 negative words
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
+
+ ### Citation Information
+ @INPROCEEDINGS{remquahey2010,
+   title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis},
+   booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)},
+   author = {Remus, R. and Quasthoff, U. and Heyer, G.},
+   year = {2010}
+ }
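
The card above documents two configurations, `pos-tagging` and `sentiment-scoring`. As a minimal usage sketch (not part of this commit; it assumes the dataset id `senti_ws` on the Hub and the standard `datasets.load_dataset` API), loading both configurations and inspecting a record could look like this:

```python
# Hedged sketch, assuming the dataset id "senti_ws" and the configuration
# names documented in the README above.
from datasets import load_dataset

pos = load_dataset("senti_ws", "pos-tagging", split="train")
sent = load_dataset("senti_ws", "sentiment-scoring", split="train")

print(pos[0])   # e.g. {"word": "Abbau", "pos-tag": 0}
print(sent[0])  # e.g. {"word": "Abbau", "sentiment-score": -0.058}
```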
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"pos-tagging": {"description": "SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, and pos-tagging. The POS tags are [\"NN\", \"VVINF\", \"ADJX\", \"ADV\"] -> [\"noun\", \"verb\", \"adjective\", \"adverb\"], and positive and negative polarity bearing words are weighted within the interval of [-1, 1].\n", "citation": "@INPROCEEDINGS{remquahey2010,\ntitle = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis},\nbooktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)},\nauthor = {Remus, R. and Quasthoff, U. and Heyer, G.},\nyear = {2010}\n}\n", "homepage": "", "license": "Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License", "features": {"word": {"dtype": "string", "id": null, "_type": "Value"}, "pos-tag": {"num_classes": 4, "names": ["NN", "VVINF", "ADJX", "ADV"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "senti_ws", "config_name": "pos-tagging", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 75530, "num_examples": 3471, "dataset_name": "senti_ws"}}, "download_checksums": {"https://pcai056.informatik.uni-leipzig.de/downloads/etc/SentiWS/SentiWS_v2.0.zip": {"num_bytes": 97748, "checksum": "4dd6ce99a44b5122c04fe0e7ca36db1f94d738201095edabb0a16cc98f160b91"}}, "download_size": 97748, "post_processing_size": null, "dataset_size": 75530, "size_in_bytes": 173278}, "sentiment-scoring": {"description": "SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, and pos-tagging. The POS tags are [\"NN\", \"VVINF\", \"ADJX\", \"ADV\"] -> [\"noun\", \"verb\", \"adjective\", \"adverb\"], and positive and negative polarity bearing words are weighted within the interval of [-1, 1].\n", "citation": "@INPROCEEDINGS{remquahey2010,\ntitle = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis},\nbooktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)},\nauthor = {Remus, R. and Quasthoff, U. and Heyer, G.},\nyear = {2010}\n}\n", "homepage": "", "license": "Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License", "features": {"word": {"dtype": "string", "id": null, "_type": "Value"}, "sentiment-score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "senti_ws", "config_name": "sentiment-scoring", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 61646, "num_examples": 3471, "dataset_name": "senti_ws"}}, "download_checksums": {"https://pcai056.informatik.uni-leipzig.de/downloads/etc/SentiWS/SentiWS_v2.0.zip": {"num_bytes": 97748, "checksum": "4dd6ce99a44b5122c04fe0e7ca36db1f94d738201095edabb0a16cc98f160b91"}}, "download_size": 97748, "post_processing_size": null, "dataset_size": 61646, "size_in_bytes": 159394}}
dummy/pos-tagging/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe1cc35c819b01db658839eba4dd3a552bc400e099a4749cbc0eddd0a74ca817
+ size 1345
dummy/sentiment-scoring/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2931c34cd7d1228d3c32f8f3bb7ec58786ab00c61786b3189baf49044759df6
+ size 1345
senti_ws.py ADDED
@@ -0,0 +1,133 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """SentiWS: German-language resource for sentiment analysis and POS tagging."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @INPROCEEDINGS{remquahey2010,
+ title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis},
+ booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)},
+ author = {Remus, R. and Quasthoff, U. and Heyer, G.},
+ year = {2010}
+ }
+ """
+
+ _DESCRIPTION = """\
+ SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, and pos-tagging. The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity bearing words are weighted within the interval of [-1, 1].
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = "Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License"
+
+ _URLs = ["https://pcai056.informatik.uni-leipzig.de/downloads/etc/SentiWS/SentiWS_v2.0.zip"]
+
+
+ class SentiWS(datasets.GeneratorBasedBuilder):
+     """SentiWS: German-language resource for sentiment analysis and POS tagging."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="pos-tagging", version=VERSION, description="This covers the POS-tagging task"),
+         datasets.BuilderConfig(
+             name="sentiment-scoring",
+             version=VERSION,
+             description="This covers sentiment scoring in [-1, 1], from negative to positive sentiment",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "pos-tagging"
+
+     def _info(self):
+         # The two configurations expose different feature sets:
+         # "pos-tagging" pairs each word with a ClassLabel over ["NN", "VVINF", "ADJX", "ADV"]
+         # ("noun", "verb", "adjective", "adverb"); "sentiment-scoring" pairs it with a polarity score in [-1, 1].
+         if self.config.name == "pos-tagging":
+             features = datasets.Features(
+                 {
+                     "word": datasets.Value("string"),
+                     "pos-tag": datasets.ClassLabel(names=["NN", "VVINF", "ADJX", "ADV"]),
+                 }
+             )
+         else:  # sentiment-scoring
+             features = datasets.Features(
+                 {
+                     "word": datasets.Value("string"),
+                     "sentiment-score": datasets.Value("float32"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,  # The features differ between the two configurations, so they are built above.
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # Downloads and extracts the data and defines the splits; the configuration
+         # selected by the user is available in self.config.name.
+
+         # dl_manager is a datasets.download.DownloadManager used to download and extract URLs.
+         # It accepts any type or nested list/dict and returns the same structure with each URL
+         # replaced by the path to the local, extracted files.
+         my_urls = _URLs
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "sourcefiles": [
+                         os.path.join(data_dir[0], f)
+                         for f in ["SentiWS_v2.0_Positive.txt", "SentiWS_v2.0_Negative.txt"]
+                     ],
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, sourcefiles, split):
+         """Yields examples."""
+         # Each raw line looks like "word|POS<TAB>score<TAB>inflections"; a running counter keeps the keys unique across both word lists.
+         idx = 0
+         for filepath in sourcefiles:
+             with open(filepath, encoding="utf-8") as f:
+                 for row in f:
+                     word = row.split("|")[0]
+                     if self.config.name == "pos-tagging":
+                         tag = row.split("|")[1].split("\t")[0]
+                         yield idx, {"word": word, "pos-tag": tag}
+                     else:
+                         sentiscore = row.split("|")[1].split("\t")[1]
+                         yield idx, {"word": word, "sentiment-score": float(sentiscore)}
+                     idx += 1
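
For reference, `_generate_examples` above splits each raw SentiWS line on `|` and on tab characters. A standalone sketch of that parsing on a single line (the `word|POS<TAB>score<TAB>inflections` layout is inferred from the code; the sample values mirror the "Abbau" example in the card):

```python
# Illustrative raw line in the layout implied by _generate_examples above.
row = "Abbau|NN\t-0.058\tAbbaus,Abbaues,Abbauen,Abbaue\n"

word = row.split("|")[0]                         # "Abbau"
tag = row.split("|")[1].split("\t")[0]           # "NN"
score = float(row.split("|")[1].split("\t")[1])  # -0.058

print({"word": word, "pos-tag": tag})            # shape of a pos-tagging example
print({"word": word, "sentiment-score": score})  # shape of a sentiment-scoring example
```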