Languages: Tagalog
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: crowdsourced
Source Datasets: original
Commit 877657c (0 parents), committed by system (HF staff):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,157 @@
---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
languages:
- tl
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Dengue Dataset in Filipino

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Dengue Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Dengue Dataset in Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [IEEE paper](https://ieeexplore.ieee.org/document/8459963)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)

### Dataset Summary

Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples. Each example is labeled against five classes and can belong to more than one of them. The examples were collected as tweets.

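The dataset ships with a loader script (see `dengue_filipino.py` in this commit), so it can be pulled in through the `datasets` library. A minimal sketch, assuming the loader is available under the name `dengue_filipino`:

```python
from datasets import load_dataset

# Download and prepare all splits through the dengue_filipino loader script.
dengue = load_dataset("dengue_filipino")

# Inspect the splits and their sizes.
for split_name, split in dengue.items():
    print(split_name, split.num_rows)
```
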
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is primarily in Filipino, with some English words that are commonly used in the Filipino vernacular.

## Dataset Structure

### Data Instances

Sample data:
```
{
  "text": "Tapos ang dami pang lamok.",
  "absent": "0",
  "dengue": "0",
  "health": "0",
  "mosquito": "1",
  "sick": "0"
}
```

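Note that the sample above shows the labels as strings, while the loader declares each label column as a two-class `ClassLabel`, so loaded examples carry integer indices. A minimal sketch of mapping them back to the declared names, again assuming the dataset loads under the name `dengue_filipino`:

```python
from datasets import load_dataset

train = load_dataset("dengue_filipino", split="train")
example = train[0]

# Each label column is a ClassLabel with names ["0", "1"];
# int2str converts the stored integer index back to its name.
for column in ["absent", "dengue", "health", "mosquito", "sick"]:
    print(column, example[column], train.features[column].int2str(example[column]))
```
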
### Data Fields

- `text`: the tweet text, as a string.
- `absent`, `dengue`, `health`, `mosquito`, `sick`: binary labels (`"0"` or `"1"`) indicating whether the tweet belongs to the corresponding class.

### Data Splits

The dataset is split into 4,015 training, 500 testing, and 500 validation examples.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)

### Licensing Information

[More Information Needed]

### Citation Information

@INPROCEEDINGS{8459963,
  author={E. D. {Livelo} and C. {Cheng}},
  booktitle={2018 IEEE International Conference on Agents (ICA)},
  title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
  year={2018},
  volume={},
  number={},
  pages={2-7},
  doi={10.1109/AGENTS.2018.8459963}
}
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": " Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.\n", "citation": " @INPROCEEDINGS{8459963,\n author={E. D. {Livelo} and C. {Cheng}},\n booktitle={2018 IEEE International Conference on Agents (ICA)},\n title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},\n year={2018},\n volume={},\n number={},\n pages={2-7},\n doi={10.1109/AGENTS.2018.8459963}}\n }\n", "homepage": "https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "absent": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}, "dengue": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}, "health": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}, "mosquito": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sick": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "dengue_filipino", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 428553, "num_examples": 4015, "dataset_name": "dengue_filipino"}, "test": {"name": "test", "num_bytes": 428553, "num_examples": 4015, "dataset_name": "dengue_filipino"}, "validation": {"name": "validation", "num_bytes": 54384, "num_examples": 500, "dataset_name": "dengue_filipino"}}, "download_checksums": {"https://s3.us-east-2.amazonaws.com/blaisecruz.com/datasets/dengue/dengue_raw.zip": {"num_bytes": 156014, "checksum": "928f7072dec6830c2f18aabc490aec886253e76cf764e06395e7ca66c4a17c4c"}}, "download_size": 156014, "post_processing_size": null, "dataset_size": 911490, "size_in_bytes": 1067504}}
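The recorded metadata can be inspected directly from the JSON above; a minimal sketch, assuming a local copy of the file named `dataset_infos.json`:

```python
import json

# Read the metadata recorded for the "default" config.
with open("dataset_infos.json", encoding="utf-8") as f:
    info = json.load(f)["default"]

# Recorded example counts per split, plus the archive download size.
for split_name, split_info in info["splits"].items():
    print(split_name, split_info["num_examples"])
print("download_size:", info["download_size"])
```
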
dengue_filipino.py ADDED
@@ -0,0 +1,124 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Dengue: a low-resource multiclass text classification dataset in Filipino."""

import csv
import os

import datasets


_DESCRIPTION = """\
Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.
"""

_CITATION = """\
@INPROCEEDINGS{8459963,
  author={E. D. {Livelo} and C. {Cheng}},
  booktitle={2018 IEEE International Conference on Agents (ICA)},
  title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
  year={2018},
  volume={},
  number={},
  pages={2-7},
  doi={10.1109/AGENTS.2018.8459963}
}
"""

_HOMEPAGE = "https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks"

# TODO: Add the licence for the dataset here if you can find it
_LICENSE = ""

_URL = "https://s3.us-east-2.amazonaws.com/blaisecruz.com/datasets/dengue/dengue_raw.zip"


class DengueFilipino(datasets.GeneratorBasedBuilder):
    """Dengue: a low-resource multiclass text classification dataset in Filipino."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        features = datasets.Features(
            {
                "text": datasets.Value("string"),
                "absent": datasets.features.ClassLabel(names=["0", "1"]),
                "dengue": datasets.features.ClassLabel(names=["0", "1"]),
                "health": datasets.features.ClassLabel(names=["0", "1"]),
                "mosquito": datasets.features.ClassLabel(names=["0", "1"]),
                "sick": datasets.features.ClassLabel(names=["0", "1"]),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL)
        train_path = os.path.join(data_dir, "dengue", "train.csv")
        test_path = os.path.join(data_dir, "dengue", "test.csv")  # assumes the archive ships a separate test.csv for the test split
        validation_path = os.path.join(data_dir, "dengue", "valid.csv")

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": train_path,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": test_path,
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": validation_path,
                    "split": "dev",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.reader(
                csv_file, quotechar='"', delimiter=",", quoting=csv.QUOTE_ALL, skipinitialspace=True
            )
            # Skip the CSV header row.
            next(csv_reader)
            for id_, row in enumerate(csv_reader):
                try:
                    text, absent, dengue, health, mosquito, sick = row
                    payload = {
                        "text": text,
                        "absent": absent,
                        "dengue": dengue,
                        "health": health,
                        "mosquito": mosquito,
                        "sick": sick,
                    }
                    yield id_, payload
                except ValueError:
                    # Skip rows that do not unpack into exactly six columns.
                    pass
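For working on the loader script itself, `load_dataset` can also be pointed at a local copy of the file instead of the hub name. A minimal sketch; the relative path is an assumption about where the script is checked out:

```python
from datasets import load_dataset

# Exercise the builder from a local checkout of the script;
# "./dengue_filipino.py" is an assumed relative path.
dataset = load_dataset("./dengue_filipino.py")
print(dataset)
```
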
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b0c41624a83d415d484f8a552402b38dbbda81c88dfd752eb2df30618453f6d
size 1339