---
language:
- en
multilinguality:
- monolingual
license: other
license_name: topicnet
license_link: >-
  https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/LICENSE.txt
configs:
- config_name: "bag-of-words"
  default: true
  data_files:
  - split: train
    path: "data/Reuters_BOW.csv.gz"
- config_name: "natural-order-of-words"
  data_files:
  - split: train
    path: "data/Reuters_NOOW.csv.gz"
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
- multi-label-classification
tags:
- topic-modeling
- topic-modelling
- text-clustering
- multimodal-data
- multimodal-learning
- modalities
- document-representation
---
# Reuters
The Reuters Corpus contains 10,788 news documents totaling 1.3 million words. The documents have been classified into 90 topics and grouped into two sets, "training" and "test"; for example, the text with fileid 'test/14826' is a document drawn from the test set. This split supports training and evaluating algorithms that automatically detect the topic of a document.
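The same corpus is distributed with NLTK, whose reader exposes the fileid-based train/test split described above. A minimal sketch, assuming the `reuters` corpus has been fetched with `nltk.download`:

```python
import nltk
from nltk.corpus import reuters

nltk.download("reuters")  # fetch the corpus data on first use

# Fileids are prefixed with their split, e.g. 'test/14826' or 'training/9865'.
test_ids = [fid for fid in reuters.fileids() if fid.startswith("test/")]
train_ids = [fid for fid in reuters.fileids() if fid.startswith("training/")]
print(len(train_ids), len(test_ids))  # together the 10,788 documents

# A document may carry several of the 90 topic labels (multi-label).
print(reuters.categories("test/14826"))
print(reuters.words("test/14826")[:10])
```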
* Language: English
* Number of topics: 90
* Number of articles: 10,788
* Year: 2000
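## Usage
Both configurations declared in the card metadata can be loaded with the `datasets` library. A minimal sketch; `your-org/reuters` below is a placeholder, substitute this dataset's actual repository id on the Hub:

```python
from datasets import load_dataset

# "your-org/reuters" is a placeholder for the real Hub repository id.
bow = load_dataset("your-org/reuters", "bag-of-words", split="train")
noow = load_dataset("your-org/reuters", "natural-order-of-words", split="train")

# Each config exposes a single 'train' split backed by a gzipped CSV file.
print(bow)
print(noow)
```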
## References
* NLTK datasets: https://www.nltk.org/book/ch02.html
* Dataset site: https://trec.nist.gov/data/reuters/reuters.html