Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: original

Dataset Card for "newsgroup"

Dataset Summary

The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his "NewsWeeder: Learning to Filter Netnews" paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.
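As a rough illustration of the text-classification use case mentioned above, here is a minimal sketch using scikit-learn with a few made-up stand-in documents (the real corpus has roughly 20,000 posts; the toy documents, words, and labels below are invented for illustration only):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-in posts; the real data set has ~20,000 documents across 20 groups.
docs = [
    "God religion atheism belief faith",
    "atheism belief God argument religion",
    "graphics rendering image pixel shader",
    "image graphics 3d rendering polygon",
]
labels = ["alt.atheism", "alt.atheism", "comp.graphics", "comp.graphics"]

# Turn each document into a bag-of-words count vector.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Fit a naive Bayes classifier on the toy corpus.
clf = MultinomialNB()
clf.fit(X, labels)

# Classify a new, unseen post.
new_post = vectorizer.transform(["a question about rendering an image"])
print(clf.predict(new_post)[0])
```

The same pipeline applies unchanged to the full data set once the real posts and newsgroup names are substituted for the toy ones.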

This version of the collection (20news-18828) does not include cross-posts and includes only the "From" and "Subject" headers.
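Since each post keeps only its "From" and "Subject" headers followed by the body, a message can be split apart with Python's standard email parser. A small sketch, using a made-up post (the address, name, and body below are hypothetical):

```python
from email.parser import Parser

# A hypothetical post in the style of this version: only the "From" and
# "Subject" headers are retained, followed by a blank line and the body.
raw_post = """From: jane@example.com (Jane Doe)
Subject: Re: question about rendering

Body of the post goes here.
"""

msg = Parser().parsestr(raw_post)
print(msg["From"])                # sender header
print(msg["Subject"])             # subject header
print(msg.get_payload().strip())  # message body
```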

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

18828_alt.atheism

  • Size of downloaded dataset files: 14.67 MB
  • Size of the generated dataset: 1.67 MB
  • Total amount of disk used: 16.34 MB

An example of 'train' looks as follows.


18828_comp.graphics

  • Size of downloaded dataset files: 14.67 MB
  • Size of the generated dataset: 1.66 MB
  • Total amount of disk used: 16.33 MB

An example of 'train' looks as follows.


18828_comp.os.ms-windows.misc

  • Size of downloaded dataset files: 14.67 MB
  • Size of the generated dataset: 2.38 MB
  • Total amount of disk used: 17.05 MB

An example of 'train' looks as follows.


18828_comp.sys.ibm.pc.hardware

  • Size of downloaded dataset files: 14.67 MB
  • Size of the generated dataset: 1.18 MB
  • Total amount of disk used: 15.85 MB

An example of 'train' looks as follows.


18828_comp.sys.mac.hardware

  • Size of downloaded dataset files: 14.67 MB
  • Size of the generated dataset: 1.06 MB
  • Total amount of disk used: 15.73 MB

An example of 'train' looks as follows.


Data Fields

The data fields are the same among all splits.

18828_alt.atheism

  • text: a string feature.

18828_comp.graphics

  • text: a string feature.

18828_comp.os.ms-windows.misc

  • text: a string feature.

18828_comp.sys.ibm.pc.hardware

  • text: a string feature.

18828_comp.sys.mac.hardware

  • text: a string feature.

Data Splits

name                              train
18828_alt.atheism                   799
18828_comp.graphics                 973
18828_comp.os.ms-windows.misc       985
18828_comp.sys.ibm.pc.hardware      982
18828_comp.sys.mac.hardware         961

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@incollection{LANG1995331,
  title     = {NewsWeeder: Learning to Filter Netnews},
  author    = {Ken Lang},
  editor    = {Armand Prieditis and Stuart Russell},
  booktitle = {Machine Learning Proceedings 1995},
  publisher = {Morgan Kaufmann},
  address   = {San Francisco (CA)},
  pages     = {331--339},
  year      = {1995},
  isbn      = {978-1-55860-377-6},
  doi       = {10.1016/B978-1-55860-377-6.50048-7},
  url       = {https://www.sciencedirect.com/science/article/pii/B9781558603776500487},
}

Contributions

Thanks to @mariamabarham, @thomwolf, @lhoestq for adding this dataset.

