A dataset for benchmarking keyphrase extraction and generation techniques on long-document English scientific papers. For more details about the dataset, please refer to the original paper - []().

Data source - []()

## Dataset Summary

A dataset for identifying keyphrases from long documents: each datapoint pairs a full-length scientific paper, split into sections, with its gold-standard present and absent keyphrases.

## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **sections**: list of all the sections present in the document.
- **sec_text**: list of whitespace-separated word sequences, one per section.
- **sec_bio_tags**: list of BIO tag sequences, aligned with the words of each section.
- **extractive_keyphrases**: list of all the present keyphrases (keyphrases that appear verbatim in the document).
- **abstractive_keyphrases**: list of all the absent keyphrases (keyphrases that do not appear verbatim in the document).
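
To make the structure concrete, a single datapoint might look like the sketch below. This is a hypothetical, heavily truncated example: the values are invented, and it assumes each entry of `sec_text` and `sec_bio_tags` is a list of tokens/tags aligned position by position.

```python
# Hypothetical, truncated datapoint (values invented for illustration).
sample = {
    "id": "doc-000001",
    "sections": ["title", "abstract", "introduction"],
    "sec_text": [
        ["a", "study", "of", "keyphrase", "extraction"],   # title tokens
        ["we", "present", "a", "new", "dataset"],          # abstract tokens
        ["keyphrase", "extraction", "is", "useful"],       # introduction tokens
    ],
    "sec_bio_tags": [
        ["O", "O", "O", "B", "I"],                         # aligned with sec_text[0]
        ["O", "O", "O", "O", "O"],                         # aligned with sec_text[1]
        ["B", "I", "O", "O"],                              # aligned with sec_text[2]
    ],
    "extractive_keyphrases": ["keyphrase extraction"],
    "abstractive_keyphrases": ["scholarly document processing"],
}
```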

### Data Splits

| Split | # Datapoints |
| -- | -- |
| Train-Small | 20,000 |
| Train-Medium | 50,000 |
| Train-Large | 1,296,613 |
| Test | 10,000 |
| Validation | 10,000 |
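
As the table suggests, the configuration name ("small", "medium", or "large") selects which train split is downloaded, while the test and validation splits stay the same size. A minimal sketch for confirming the split sizes with the standard `datasets` API:

```python
from datasets import load_dataset

# Load one configuration and report the number of rows in each split.
dataset = load_dataset("midas/ldkp10k", "small")
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```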

## Usage

### Small Dataset

```python
from datasets import load_dataset

# get small dataset
dataset = load_dataset("midas/ldkp10k", "small")

def order_sections(sample):
    """
    corrects the order in which different sections appear in the document.
    resulting order is: title, abstract, other sections in the body
    """

    sections = []
    sec_text = []
    sec_bio_tags = []

    if "title" in sample["sections"]:
        title_idx = sample["sections"].index("title")
        sections.append(sample["sections"].pop(title_idx))
        sec_text.append(sample["sec_text"].pop(title_idx))
        sec_bio_tags.append(sample["sec_bio_tags"].pop(title_idx))

    if "abstract" in sample["sections"]:
        abstract_idx = sample["sections"].index("abstract")
        sections.append(sample["sections"].pop(abstract_idx))
        sec_text.append(sample["sec_text"].pop(abstract_idx))
        sec_bio_tags.append(sample["sec_bio_tags"].pop(abstract_idx))

    sections += sample["sections"]
    sec_text += sample["sec_text"]
    sec_bio_tags += sample["sec_bio_tags"]

    return sections, sec_text, sec_bio_tags


# sample from the train split
print("Sample from train data split")
train_sample = dataset["train"][0]

sections, sec_text, sec_bio_tags = order_sections(train_sample)
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]

sections, sec_text, sec_bio_tags = order_sections(validation_sample)
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]

sections, sec_text, sec_bio_tags = order_sections(test_sample)
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

**Output**
```bash

```
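
Because `sec_bio_tags` aligns a tag with every token, present keyphrases can also be recovered directly from the token sequence. Below is a minimal, hypothetical decoder for a `B`/`I`/`O` tag scheme; the helper name `decode_bio` is ours and not part of the dataset, and it assumes each section is a list of tokens (apply `words.split()` first if a section is a single string). In principle the result should match `extractive_keyphrases` up to ordering and duplicates.

```python
def decode_bio(words, tags):
    """Collect contiguous B/I spans from one section into keyphrase strings.

    Sketch only: assumes tags are "B" (begin), "I" (inside), "O" (outside).
    """
    phrases, current = [], []
    for word, tag in zip(words, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:
            current.append(word)
        else:
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases


# Recover present keyphrases from every section of a fresh train sample.
sample = dataset["train"][0]
present = []
for words, tags in zip(sample["sec_text"], sample["sec_bio_tags"]):
    present.extend(decode_bio(words, tags))
print(sorted(set(present)))
```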

### Medium Dataset

```python
from datasets import load_dataset

# get medium dataset
dataset = load_dataset("midas/ldkp10k", "medium")
```

### Large Dataset

```python
from datasets import load_dataset

# get large dataset
dataset = load_dataset("midas/ldkp10k", "large")
```

## Citation Information

Please cite the works below if you use this dataset in your work.

```
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.