system HF staff committed on
Commit
8bff759
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +155 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. telugu_news.py +126 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,155 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - other
+ languages:
+ - te
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ - text-classification
+ task_ids:
+ - language-modeling
+ - multi-class-classification
+ - topic-classification
+ ---
+
+ # Dataset Card for Telugu News
+
+ ## Table of Contents
+ - [Dataset Card for Telugu News](#dataset-card-for-telugu-news)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
+ - **Repository:** https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
+
+ ### Dataset Summary
+
+ This dataset contains Telugu-language news articles along with their respective topic
+ labels (business, editorial, entertainment, nation, sports), extracted from the daily Andhra Jyoti.
+ It can be used to build classification and language models.
+
+ ### Supported Tasks and Leaderboards
+
+ Multi-class classification, topic classification, and language modeling.
+
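+ As a rough sketch of how the classification task could be set up (the tokenizer checkpoint below is an arbitrary multilingual example, not one prescribed by the dataset, and `<path/to/folder>` stands for wherever the manually downloaded CSVs were placed):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ # Load the train/test splits from the manually downloaded CSVs.
+ ds = load_dataset("telugu_news", data_dir="<path/to/folder>")
+
+ # Any tokenizer covering Telugu script would do; mBERT is one option.
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
+
+ def tokenize(batch):
+     # Truncate long article bodies to the model's maximum length.
+     return tokenizer(batch["body"], truncation=True, max_length=512)
+
+ encoded = ds.map(tokenize, batched=True)
+ ```
+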
+ ### Languages
+
+ Telugu (`te`), as spoken in India.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Two CSV files (train and test), each with five columns (sno, date, heading, body, topic).
+
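+ A minimal loading sketch (the dataset requires a manual download from Kaggle, so `data_dir` must point at the folder holding the two CSVs):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("telugu_news", data_dir="<path/to/folder>")
+ print({split: ds[split].num_rows for split in ds})  # {'train': 17312, 'test': 4329}
+ print(ds["train"][0])  # a single instance with the five fields below
+ ```
+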
+ ### Data Fields
+
+ - `sno`: serial number (id) of the article
+ - `date`: publication date of the news article
+ - `heading`: article heading/title
+ - `body`: article body/content
+ - `topic`: one of five topics (business, editorial, entertainment, nation, sports)
+
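+ For illustration, an instance has roughly the following shape (the values shown are invented placeholders, not real rows):
+
+ ```python
+ {
+     "sno": 1,                         # hypothetical id
+     "date": "2019-01-01",             # hypothetical date string
+     "heading": "<Telugu headline>",   # placeholder text
+     "body": "<Telugu article body>",  # placeholder text
+     "topic": 0,                       # ClassLabel index; 0 maps to "business"
+ }
+ ```
+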
+ ### Data Splits
+
+ Train (17,312 examples) and test (4,329 examples).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ - https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
+ - https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
+
+ #### Initial Data Collection and Normalization
+
+ The source data consists of articles scraped from the archives of the Telugu newspaper website Andhra Jyoti.
+ A set of queries was created, and the corresponding ground-truth answers were retrieved using a combination of BM25 and tf-idf.
+
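+ For context, a generic BM25 retrieval step of the kind described might look like the sketch below. This is illustrative only, using the `rank_bm25` package, and is not the curators' actual pipeline:
+
+ ```python
+ from rank_bm25 import BM25Okapi
+
+ corpus = ["first article body ...", "second article body ..."]  # placeholder documents
+ tokenized_corpus = [doc.split() for doc in corpus]
+
+ bm25 = BM25Okapi(tokenized_corpus)
+ query = "example query".split()
+ scores = bm25.get_scores(query)  # one relevance score per document
+ ```
+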
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Sudalai Rajkumar, Anusha Motamarri
+
+ ### Licensing Information
+
+ Data files © Original Authors (license as stated on the Kaggle dataset page).
+
+ ### Citation Information
+
+ @InProceedings{kaggle:dataset,
+   title = {Telugu News - Natural Language Processing for Indian Languages},
+   authors = {Sudalai Rajkumar, Anusha Motamarri},
+   year = {2019}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset contains Telugu language news articles along with respective\ntopic labels (business, editorial, entertainment, nation, sport) extracted from\nthe daily Andhra Jyoti. This dataset could be used to build Classification and Language Models.\n", "citation": "@InProceedings{kaggle:dataset,\ntitle = {Telugu News - Natural Language Processing for Indian Languages},\nauthors={Sudalai Rajkumar, Anusha Motamarri},\nyear={2019}\n}\n", "homepage": "https://www.kaggle.com/sudalairajkumar/telugu-nlp", "license": "Data files \u00a9 Original Authors", "features": {"sno": {"dtype": "int32", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "heading": {"dtype": "string", "id": null, "_type": "Value"}, "body": {"dtype": "string", "id": null, "_type": "Value"}, "topic": {"num_classes": 5, "names": ["business", "editorial", "entertainment", "nation", "sports"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "telugu_news", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 69400234, "num_examples": 17312, "dataset_name": "telugu_news"}, "test": {"name": "test", "num_bytes": 17265514, "num_examples": 4329, "dataset_name": "telugu_news"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 86665748, "size_in_bytes": 86665748}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e1b332352c7cc6c4c5f185de198510825889009aecb5e87e6daf3c5dc8fffe7
+ size 5754
telugu_news.py ADDED
@@ -0,0 +1,126 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Telugu-language news articles with topic labels, scraped from the daily Andhra Jyoti."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{kaggle:dataset,
+ title = {Telugu News - Natural Language Processing for Indian Languages},
+ authors={Sudalai Rajkumar, Anusha Motamarri},
+ year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset contains Telugu language news articles along with respective
+ topic labels (business, editorial, entertainment, nation, sports) extracted from
+ the daily Andhra Jyoti. This dataset can be used to build classification and language models.
+ """
+
+ _HOMEPAGE = "https://www.kaggle.com/sudalairajkumar/telugu-nlp"
+
+ _LICENSE = "Data files © Original Authors"
+
+
+ class TeluguNews(datasets.GeneratorBasedBuilder):
+     """Telugu news articles with topic labels."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+     You need to visit Kaggle at https://www.kaggle.com/sudalairajkumar/telugu-nlp
+     and manually download the `telugu_news` dataset. This downloads a file called
+     `telugu_news.zip`. Unzip it and move the two CSV files (train and test) to
+     <path/to/folder>. You can then use
+     `datasets.load_dataset("telugu_news", data_dir="<path/to/folder>")` to load the dataset.
+     """
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "sno": datasets.Value("int32"),
+                 "date": datasets.Value("string"),
+                 "heading": datasets.Value("string"),
+                 "body": datasets.Value("string"),
+                 "topic": datasets.features.ClassLabel(
+                     names=["business", "editorial", "entertainment", "nation", "sports"],
+                 ),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+
+         if not os.path.exists(data_dir):
+             raise FileNotFoundError(
+                 "{} does not exist. Download instructions: {}".format(
+                     data_dir,
+                     self.manual_download_instructions,
+                 )
+             )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "train_telugu_news.csv"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "test_telugu_news.csv"),
+                     "split": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as csv_file:
+             csv_reader = csv.reader(csv_file)
+             next(csv_reader, None)  # skip the header row
+
+             for id_, row in enumerate(csv_reader):
+                 # Each row maps positionally onto the five features declared in _info().
+                 sno, date, heading, body, topic = row
+                 yield id_, {
+                     "sno": sno,
+                     "date": date,
+                     "heading": heading,
+                     "body": body,
+                     "topic": topic,
+                 }