---
task_categories:
- summarization
language:
- hi
- gu
- en
configs:
- config_name: Hindi
  data_files:
  - split: train
    path: Hindi/train.csv
  - split: test
    path: Hindi/test.csv
  - split: validation
    path: Hindi/val.csv
  default: true
- config_name: Gujarati
  data_files:
  - split: train
    path: Gujarati/train.csv
  - split: test
    path: Gujarati/test.csv
  - split: validation
    path: Gujarati/val.csv
- config_name: English
  data_files:
  - split: train
    path: English/train.csv
  - split: test
    path: English/test.csv
  - split: validation
    path: English/val.csv
config_names:
- English
- Hindi
- Gujarati
size_categories:
- 1K<n<10K
- 10K<n<100K
---
# Dataset Card for "ILSUM-1.0"

## Dataset Summary
Automatic text summarization for Indian languages has received surprisingly little attention from the NLP research community. While large-scale datasets exist for languages such as English, Chinese, French, German, and Spanish, no comparable datasets exist for any Indian language: most existing ones are either not public or too small to be useful. Through this shared task we aim to bridge that gap by creating reusable corpora for Indian language summarization. The first edition covers two major Indian languages, Hindi and Gujarati, which have over 350 million and over 50 million speakers respectively. We also include Indian English, a widely recognized dialect that can differ substantially from English spoken elsewhere.
The dataset for this task is built from article and headline pairs drawn from several leading newspapers of the country. We provide ~10,000 news articles for each language. The task is to generate a meaningful fixed-length summary, either extractive or abstractive, for each article. While several previous works in other languages use news articles paired with headlines, this dataset poses the additional challenges of code-mixing and script mixing: it is very common for news articles to borrow phrases from English even when the article itself is written in an Indian language.
Examples like these are a common occurrence, both in the headlines and in the articles:
- "IND vs SA, 5મી T20 તસવીરોમાં: વરસાદે વિલન બની મજા બગાડી" (India vs SA, 5th T20 in pictures: rain spoils the match)
- "LIC के IPO में पैसा लगाने वालों का टूटा दिल, आई एक और नुकसानदेह खबर" (Investors of LIC IPO left broken hearted, yet another bad news).
## Languages
- Hindi
- Gujarati
- English
## Data Fields
- `id`: Unique id of each datapoint
- `Article`: Entire news article
- `Headline`: Headline of the news article
- `Summary`: Summary of the news article
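Each split is stored as a plain CSV file with these columns. As a minimal, stdlib-only sketch of the record layout (the row content below is invented purely for illustration; the real files live under `<Language>/{train,val,test}.csv` in the repository), a record can be parsed like so:

```python
import csv
import io

# Hypothetical in-memory sample mirroring the documented fields:
# id, Article, Headline, Summary.
sample = io.StringIO(
    "id,Article,Headline,Summary\n"
    '1,"Full text of the news article ...","Example headline","Short summary ..."\n'
)

# csv.DictReader maps each row to the documented field names.
rows = list(csv.DictReader(sample))
print(sorted(rows[0].keys()))  # ['Article', 'Headline', 'Summary', 'id']
```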
## Data Splits

Data for each of the three languages is divided into three splits: train, validation, and test.
## Loading the dataset

Load any language configuration with the `datasets` library:

```python
from datasets import load_dataset

# Valid config names for the second argument:
# "English", "Hindi", "Gujarati"
dataset = load_dataset("ILSUM/ILSUM-1.0", "Hindi")
```
## Citation Information

If you are using the dataset or the models, please cite the following paper:
```bibtex
@article{satapara2022findings,
  title={Findings of the first shared task on indian language summarization (ilsum): Approaches, challenges and the path ahead},
  author={Satapara, Shrey and Modha, Bhavan and Modha, Sandip and Mehta, Parth},
  journal={Working Notes of FIRE},
  pages={9--13},
  year={2022}
}
```
## Contributions
- Bhavan Modha, University of Texas at Dallas, USA
- Shrey Satapara, Indian Institute of Technology, Hyderabad, India
- Sandip Modha, LDRP-ITR, Gandhinagar, India
- Parth Mehta, Parmonic, USA