system HF staff committed on
Commit
ce7724a
1 Parent(s): a0c54a3

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +174 -0
README.md ADDED
---
annotations_creators:
- found
language_creators:
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
---

# Dataset Card for "ag_news"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 29.88 MB
- **Size of the generated dataset:** 30.23 MB
- **Total amount of disk used:** 60.10 MB

### [Dataset Summary](#dataset-summary)

AG is a collection of more than 1 million news articles, gathered from more than 2,000 news sources by ComeToMyHead over more than one year of activity. ComeToMyHead is an academic news search engine that has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. For more information, please refer to http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html.

The AG's News topic classification dataset was constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the collection above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).

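The dataset can be loaded directly with the `datasets` library. A minimal sketch (the configuration, splits, and label details are documented in the tables below):

```python
from datasets import load_dataset

# Downloads and caches the AG News "default" configuration.
dataset = load_dataset("ag_news")

print(dataset)  # a DatasetDict with "train" and "test" splits
```
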
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for the dataset's single configuration, `default`.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 29.88 MB
- **Size of the generated dataset:** 30.23 MB
- **Total amount of disk used:** 60.10 MB

An example from the 'train' split looks as follows:
```
{
    "label": 3,
    "text": "New iPad released Just like every other September, this one is no different. Apple is planning to release a bigger, heavier, fatter iPad that..."
}
```

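An example like this can be retrieved by plain indexing. A short sketch, assuming the `dataset` object from the loading example above:

```python
example = dataset["train"][0]  # a plain dict with "text" and "label" keys
print(example["text"])         # the news snippet
print(example["label"])        # an integer class id between 0 and 3
```
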
### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default

- `text`: a `string` feature.
- `label`: a classification label, with possible values including `World` (0), `Sports` (1), `Business` (2), `Sci/Tech` (3).

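The integer labels can be mapped to and from the class names via the split's `ClassLabel` feature. A short sketch, again assuming `dataset` from the loading example:

```python
label_feature = dataset["train"].features["label"]

print(label_feature.names)              # ['World', 'Sports', 'Business', 'Sci/Tech']
print(label_feature.int2str(3))         # 'Sci/Tech'
print(label_feature.str2int("Sports"))  # 1
```
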
### [Data Splits Sample Size](#data-splits-sample-size)

| name    |  train | test |
|---------|-------:|-----:|
| default | 120000 | 7600 |

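These split sizes can be verified programmatically. A quick sketch, assuming the `dataset` object from above:

```python
for split_name, split in dataset.items():
    print(split_name, split.num_rows)  # expected: train 120000, test 7600
```
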
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{Zhang2015CharacterlevelCN,
  title={Character-level Convolutional Networks for Text Classification},
  author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
  booktitle={NIPS},
  year={2015}
}
```

173
+
174
+ Thanks to [@jxmorris12](https://github.com/jxmorris12), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun) for adding this dataset.