## Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on English scientific documents. For more details about the dataset, please refer to the original paper: [https://arxiv.org/abs/1704.02853](https://arxiv.org/abs/1704.02853)

Original source of the data: [https://github.com/LIAAD/KeywordExtractor-Datasets/blob/master/datasets/SemEval2017.zip](https://github.com/LIAAD/KeywordExtractor-Datasets/blob/master/datasets/SemEval2017.zip)


## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. `B` marks the beginning of a keyphrase, `I` marks a word inside a keyphrase, and `O` marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: list of all keyphrases present in the document.
- **abstractive_keyphrases**: list of all keyphrases absent from the document.

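The BIO tags above are all you need to recover the present keyphrases from a tokenized document. The helper below is a hypothetical illustration (it is not part of the dataset or its loader): it walks the tokens and tags in parallel and collects each maximal `B`/`I` span as one keyphrase.

```python
def decode_bio(tokens, tags):
    """Collect each maximal B/I span of `tags` as one keyphrase from `tokens`."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":  # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continue the open keyphrase
            current.append(token)
        else:  # "O" closes any open keyphrase
            if current:
                phrases.append(" ".join(current))
                current = []
    if current:
        phrases.append(" ".join(current))
    return phrases


tokens = ["deep", "learning", "improves", "keyphrase", "extraction"]
tags = ["B", "I", "O", "B", "I"]
print(decode_bio(tokens, tags))  # ['deep learning', 'keyphrase extraction']
```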
### Data Splits

| Split | #datapoints |
|--|--|
| Test | 493 |

## Usage

### Full Dataset

```python
from datasets import load_dataset

# load the entire dataset
dataset = load_dataset("midas/semeval2017", "raw")

# sample from the train split
print("Sample from train dataset split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**

```bash

```

### Keyphrase Extraction

```python
from datasets import load_dataset

# load the dataset configured for keyphrase extraction
dataset = load_dataset("midas/semeval2017", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the train split
print("Sample from train data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```

### Keyphrase Generation

```python
from datasets import load_dataset

# load the dataset configured for keyphrase generation
dataset = load_dataset("midas/semeval2017", "generation")

print("Samples for Keyphrase Generation")

# sample from the train split
print("Sample from train data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

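Since the dataset is meant for benchmarking, predicted keyphrases are commonly scored against the gold lists with exact-match F1@k. The helper below is a minimal sketch, not part of this dataset's tooling; it lowercases phrases, keeps the top-k predictions, and computes precision, recall, and F1 against the gold set.

```python
def f1_at_k(predicted, gold, k=5):
    """Exact-match F1 between the top-k predicted keyphrases and the gold set."""
    topk = [p.lower() for p in predicted[:k]]
    gold_set = {g.lower() for g in gold}
    if not topk or not gold_set:
        return 0.0
    tp = sum(1 for p in topk if p in gold_set)  # true positives
    precision = tp / len(topk)
    recall = tp / len(gold_set)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# 2 of 3 predictions match both gold phrases: P = 2/3, R = 1, F1 = 0.8
print(f1_at_k(["neural network", "svm", "deep learning"],
              ["deep learning", "neural network"], k=5))
```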
## Citation Information

```
@article{DBLP:journals/corr/AugensteinDRVM17,
  author     = {Isabelle Augenstein and
                Mrinal Das and
                Sebastian Riedel and
                Lakshmi Vikraman and
                Andrew McCallum},
  title      = {SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations
                from Scientific Publications},
  journal    = {CoRR},
  volume     = {abs/1704.02853},
  year       = {2017},
  url        = {http://arxiv.org/abs/1704.02853},
  eprinttype = {arXiv},
  eprint     = {1704.02853},
  timestamp  = {Mon, 13 Aug 2018 16:46:36 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/AugensteinDRVM17.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.