---
license: cc
task_categories:
  - summarization
  - feature-extraction
language:
  - as
  - bh
  - bn
  - en
  - gu
  - hi
  - kn
  - ml
  - mr
  - ne
  - or
  - pa
  - ta
  - te
  - ur
pretty_name: varta
size_categories:
  - 1B<n<10B
---

# Dataset Description

## Dataset Summary

Varta is a diverse, challenging, large-scale, multilingual, and high-quality headline-generation dataset containing 41.8 million news articles in 14 Indic languages and English. The data is crawled from DailyHunt, a popular news aggregator in India that pulls high-quality articles from multiple trusted and reputable news publishers.

## Languages

Assamese, Bhojpuri, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Tamil, Telugu, and Urdu.

# Dataset Structure

## Data Fields

- `id`: unique identifier for the article on DailyHunt. This id will be used to recreate the dataset.

- `langCode`: ISO 639-1 language code

- `source_url`: the URL that points to the article on the website of the original publisher

- `dh_url`: the URL that points to the article on DailyHunt

Once recreated, each article will have the following fields:

- `id`: unique identifier for the article on DailyHunt

- `url`: the URL that points to the article on DailyHunt

- `headline`: headline of the article

- `publication_date`: date of publication

- `text`: main body of the article

- `tags`: main topics related to the article

- `reactions`: user likes, dislikes, etc.

- `source_media`: original publisher name

- `source_url`: the URL that points to the article on the website of the original publisher

- `word_count`: number of words in the article

- `langCode`: language of the article
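
The released splits contain only the first four fields; the remaining fields are filled in during recreation. Below is a minimal sketch of loading and inspecting the released data. It assumes the Hugging Face `datasets` library and the `rahular/varta` repo id; the exact config and split names may differ from what the hub page lists.

```python
# Minimal inspection sketch; the repo id and split name are assumptions,
# adjust them to whatever the hub page actually exposes.
from datasets import load_dataset

ds = load_dataset("rahular/varta", split="validation")

print(ds.column_names)    # expected: ['id', 'langCode', 'source_url', 'dh_url']
print(ds[0]["langCode"])  # e.g. 'hi'
print(ds[0]["dh_url"])    # DailyHunt URL used for recreation
```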

## Data Splits

From every language, we randomly sample 10,000 articles each for validation and testing. We also ensure that at least 80% of a language’s data is available for training. Therefore, if a language has fewer than 100,000 articles, we restrict its validation and test splits to 10% of its size each.

We also create a small training set by limiting the number of articles from each language to 100K. This small training set, with a total size of 1.3M articles, is used in all our fine-tuning experiments. You can find the small training set here.
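
The split construction described above can be sketched as follows. This is only an illustration of the stated rules (10K articles each for validation and test, capped at 10% of a language's data, plus a 100K-per-language cap for the small training set), not the authors' actual script.

```python
# Illustrative sketch of the split logic described above.
import random

def make_splits(articles_by_lang, seed=0):
    rng = random.Random(seed)
    splits = {}
    for lang, articles in articles_by_lang.items():
        items = articles[:]
        rng.shuffle(items)
        n = len(items)
        # 10,000 articles each for validation and test, but never more than
        # 10% of the language's data each, so at least 80% remains for training.
        k = min(10_000, n // 10)
        val, test, train = items[:k], items[k:2 * k], items[2 * k:]
        splits[lang] = {
            "train": train,
            "train_small": train[:100_000],  # small training set: at most 100K per language
            "val": val,
            "test": test,
        }
    return splits
```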

# Data Recreation

To recreate the dataset, follow the instructions in this README file.
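
The rough idea is to fetch each released `dh_url` and extract the article fields from the returned page. The sketch below is only an illustration of that idea; the scripts referenced in the README above are the authoritative procedure, and the extraction shown here (a `<title>` fallback for the headline) is a placeholder assumption.

```python
# Rough illustration only; the recreation scripts in the linked README are authoritative.
import requests
from bs4 import BeautifulSoup

def fetch_article(record):
    """Fetch one released record's DailyHunt page and build a partial article dict."""
    resp = requests.get(record["dh_url"], timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return {
        "id": record["id"],
        "langCode": record["langCode"],
        "source_url": record["source_url"],
        "url": record["dh_url"],
        # Placeholder extraction: the real scripts recover the full headline, text,
        # tags, reactions, source_media, and word_count.
        "headline": soup.title.get_text(strip=True) if soup.title else None,
    }
```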

# Misc

## Citation Information

```
@misc{aralikatte2023varta,
      title={V\=arta: A Large-Scale Headline-Generation Dataset for Indic Languages},
      author={Rahul Aralikatte and Ziling Cheng and Sumanth Doddapaneni and Jackie Chi Kit Cheung},
      year={2023},
      eprint={2305.05858},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```