
Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on long English scientific papers. For more details about the dataset, please refer to the original paper.

Data source -

Dataset Structure

Data Fields

  • id: unique identifier of the document.
  • sections: list of the names of all the sections present in the document.
  • sec_text: list of the text of each section, given as a whitespace-separated (tokenized) list of words.
  • sec_bio_tags: list of the BIO tags of each section, with one tag per word in sec_text.
  • extractive_keyphrases: list of all the present keyphrases, i.e., those that appear verbatim in the document.
  • abstractive_keyphrases: list of all the absent keyphrases, i.e., those that do not appear verbatim in the document.
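
A hypothetical datapoint illustrating how these fields fit together (all values are invented for illustration; the B, I, and O tags mark the beginning, inside, and outside of keyphrase spans):

# hypothetical datapoint, values invented for illustration
sample = {
  "id": 1001,
  "sections": ["title", "abstract"],
  "sec_text": [
    ["deep", "keyphrase", "generation"],                # tokens of the title
    ["we", "study", "keyphrase", "generation", "here"]  # tokens of the abstract
  ],
  "sec_bio_tags": [
    ["O", "B", "I"],                                    # one tag per token
    ["O", "O", "B", "I", "O"]
  ],
  "extractive_keyphrases": ["keyphrase generation"],    # appears verbatim above
  "abstractive_keyphrases": ["text mining"]             # does not appear verbatim
}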

Data Splits

Split          #datapoints
Train-Small         20,000
Train-Medium        50,000
Train-Large         90,019
Test                 3,413
Validation           3,339
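
The configuration name passed to load_dataset selects the training split size ("small", "medium", or "large", as in the Usage section below), and each configuration also carries the test and validation splits listed above. A quick sketch for checking the split sizes:

from datasets import load_dataset

# load the small configuration and count the datapoints in each split
dataset = load_dataset("midas/ldkp3k", "small")
print({split: len(ds) for split, ds in dataset.items()})
# per the table above: 20,000 train, 3,339 validation, and 3,413 test datapoints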

Usage

Small Dataset

from datasets import load_dataset

# get small dataset
dataset = load_dataset("midas/ldkp3k", "small")

def order_sections(sample):
  # Reorder a document's sections so that the title and abstract
  # (when present) come first, followed by the remaining sections.
  # Note: the pops mutate the lists inside `sample` in place.
  sections = []
  sec_text = []
  sec_bio_tags = []

  # move the title to the front, if the document has one
  if "title" in sample["sections"]:
    title_idx = sample["sections"].index("title")
    sections.append(sample["sections"].pop(title_idx))
    sec_text.append(sample["sec_text"].pop(title_idx))
    sec_bio_tags.append(sample["sec_bio_tags"].pop(title_idx))

  # then the abstract, if present
  if "abstract" in sample["sections"]:
    abstract_idx = sample["sections"].index("abstract")
    sections.append(sample["sections"].pop(abstract_idx))
    sec_text.append(sample["sec_text"].pop(abstract_idx))
    sec_bio_tags.append(sample["sec_bio_tags"].pop(abstract_idx))

  # append whatever sections remain, in their original order
  sections += sample["sections"]
  sec_text += sample["sec_text"]
  sec_bio_tags += sample["sec_bio_tags"]

  return sections, sec_text, sec_bio_tags

# print an annotated sample from each data split
for split in ["train", "validation", "test"]:
  print(f"Sample from {split} data split")
  sample = dataset[split][0]

  sections, sec_text, sec_bio_tags = order_sections(sample)
  print("Fields in the sample: ", list(sample.keys()))
  print("Section names: ", sections)
  print("Tokenized Document: ", sec_text)
  print("Document BIO Tags: ", sec_bio_tags)
  print("Extractive/present Keyphrases: ", sample["extractive_keyphrases"])
  print("Abstractive/absent Keyphrases: ", sample["abstractive_keyphrases"])
  print("\n-----------\n")


Medium Dataset

from datasets import load_dataset

# get medium dataset
dataset = load_dataset("midas/ldkp3k", "medium")

Large Dataset

from datasets import load_dataset

# get large dataset
dataset = load_dataset("midas/ldkp3k", "large")

Citation Information

Please cite the works below if you use this dataset.


Contributions

Thanks to @debanjanbhucs, @dibyaaaaax, @UmaGunturi, and @ad6398 for adding this dataset.