## Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on long English scientific papers. For more details about the dataset, please refer to the original paper: [https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83](https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83)

Original source of the data - []()

## Dataset Structure

### Data Fields

- **id**: Unique identifier of the document.
- **document**: Whitespace-separated list of the words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. "B" marks the beginning of a keyphrase, "I" marks a word inside a keyphrase, and "O" marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
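The BIO scheme above can be sketched in code. The following is a minimal illustration (the sample tokens and tags are made up, not taken from the dataset): it rebuilds the present keyphrases from a `document`/`doc_bio_tags` pair of parallel lists.

```python
# Minimal sketch: recover present keyphrases from BIO tags.
# `tokens` and `tags` are parallel lists, as in the `document` and
# `doc_bio_tags` fields; the sample values below are illustrative.

def decode_bio(tokens, tags):
    """Collect each maximal B/I span as one keyphrase."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                    # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:      # continue the open keyphrase
            current.append(token)
        else:                             # "O": close any open keyphrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["we", "study", "keyphrase", "extraction", "from", "text"]
tags   = ["O",  "O",     "B",         "I",          "O",    "O"]
print(decode_bio(tokens, tags))  # ['keyphrase extraction']
```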
### Data Splits

| Split | #datapoints |
|--|--|
| Test | 2305 |
## Usage

### Full Dataset

```python
from datasets import load_dataset

# get the entire dataset
dataset = load_dataset("midas/krapivin", "raw")

# sample from the test split
print("Sample from the test split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
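The extractive/abstractive distinction above amounts to whether a keyphrase occurs verbatim in the tokenized document. As a minimal sketch (the sample values are made up, and the dataset's own fields already provide this split):

```python
# Minimal sketch: split keyphrases into present/absent by a verbatim
# substring check on the whitespace-joined document. Illustrative only;
# it ignores normalization such as stemming.

def split_keyphrases(tokens, keyphrases):
    text = " ".join(tokens)
    present = [kp for kp in keyphrases if kp in text]
    absent = [kp for kp in keyphrases if kp not in text]
    return present, absent

tokens = ["graph", "based", "ranking", "for", "keyword", "extraction"]
present, absent = split_keyphrases(tokens, ["keyword extraction", "text mining"])
print(present)  # ['keyword extraction']
print(absent)   # ['text mining']
```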
### Keyphrase Extraction

```python
from datasets import load_dataset

# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/krapivin", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the test split
print("Sample from the test split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```

### Keyphrase Generation

```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/krapivin", "generation")

print("Samples for Keyphrase Generation")

# sample from the test split
print("Sample from the test split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
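Benchmark predictions on this dataset are commonly scored against the gold keyphrase lists with exact-match F1. The following is a minimal sketch; lowercasing as the only normalization is an assumption (published evaluations often also apply stemming):

```python
# Minimal sketch: exact-match F1 between predicted and gold keyphrases.
# Lowercasing is the only normalization applied here.

def keyphrase_f1(predicted, gold):
    pred = {p.lower() for p in predicted}
    ref = {g.lower() for g in gold}
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)                 # exact matches
    precision = tp / len(pred)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(keyphrase_f1(["Neural Networks", "topic model"],
                   ["neural networks", "keyphrase generation"]))  # 0.5
```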

## Citation Information

```
@inproceedings{Krapivin2009LargeDF,
  title={Large Dataset for Keyphrases Extraction},
  author={Mikalai Krapivin and Aliaksandr Autaeu and Maurizio Marchese},
  year={2009}
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.