## Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on English scientific articles. For more details about the dataset, please refer to the original paper: [https://dl.acm.org/doi/10.5555/1859664.1859668](https://dl.acm.org/doi/10.5555/1859664.1859668)

Original source of the data: [https://github.com/LIAAD/KeywordExtractor-Datasets/blob/master/datasets/SemEval2010.zip](https://github.com/LIAAD/KeywordExtractor-Datasets/blob/master/datasets/SemEval2010.zip)

## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase (see the decoding sketch after this list).
- **extractive_keyphrases**: list of all keyphrases that appear verbatim in the document (present keyphrases).
- **abstractive_keyphrases**: list of all keyphrases that do not appear verbatim in the document (absent keyphrases).
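
Since `doc_bio_tags` aligns one-to-one with the words in `document`, the present keyphrases can be recovered by collecting the B/I spans. Here is a minimal decoding sketch; the sample tokens and tags below are made up for illustration and are not an actual record from the dataset:

```python
def decode_bio(tokens, tags):
    """Collect token spans labelled B/I into keyphrase strings."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continue the current keyphrase
            current.append(token)
        else:                         # "O": outside any keyphrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# hypothetical example, not taken from the dataset
tokens = ["grid", "computing", "systems", "need", "resource", "allocation"]
tags = ["B", "I", "O", "O", "B", "I"]
print(decode_bio(tokens, tags))  # ['grid computing', 'resource allocation']
```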

### Data Splits

| Split | # Datapoints |
|--|--|
| Test | 243 |

## Usage

### Full Dataset

```python
from datasets import load_dataset

# load the entire dataset with all fields
dataset = load_dataset("midas/semeval2010", "raw")

# sample from the test split (the only split listed above)
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

### Keyphrase Extraction
```python
from datasets import load_dataset

# load only the document and BIO-tag fields, for keyphrase extraction
dataset = load_dataset("midas/semeval2010", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the test split (the only split listed above)
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
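
For token-classification models, the string tags usually need to be mapped to integer ids first. A minimal sketch; the label ordering below is an arbitrary choice, not something fixed by the dataset:

```python
# map the B/I/O string tags to integer ids for sequence labelling;
# this particular ordering is an assumption, not dictated by the dataset
label2id = {"B": 0, "I": 1, "O": 2}

def encode_labels(example):
    example["labels"] = [label2id[tag] for tag in example["doc_bio_tags"]]
    return example

encoded = dataset["test"].map(encode_labels)
print(encoded[0]["labels"][:20])
```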

### Keyphrase Generation
```python
from datasets import load_dataset

# load only the document and keyphrase fields, for keyphrase generation
dataset = load_dataset("midas/semeval2010", "generation")

print("Samples for Keyphrase Generation")

# sample from the test split (the only split listed above)
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
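
Seq2seq keyphrase generation models commonly flatten the keyphrase lists into a single target string. A minimal sketch, assuming a `;` separator between keyphrases (a modelling convention, not part of the dataset):

```python
# build a (source, target) pair from one record; joining keyphrases
# with ";" is a common One2Seq-style convention, not a dataset requirement
def to_seq2seq(example):
    source = " ".join(example["document"])
    keyphrases = example["extractive_keyphrases"] + example["abstractive_keyphrases"]
    return {"source": source, "target": " ; ".join(keyphrases)}

pairs = dataset["test"].map(to_seq2seq)
print(pairs[0]["target"])
```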

## Citation Information
```
@inproceedings{10.5555/1859664.1859668,
  author = {Kim, Su Nam and Medelyan, Olena and Kan, Min-Yen and Baldwin, Timothy},
  title = {SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles},
  year = {2010},
  publisher = {Association for Computational Linguistics},
  address = {USA},
  abstract = {This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted keyphrases against manually assigned ones. We present the overall ranking of the submitted systems and discuss our findings to suggest future directions for this task.},
  booktitle = {Proceedings of the 5th International Workshop on Semantic Evaluation},
  pages = {21--26},
  numpages = {6},
  location = {Los Angeles, California},
  series = {SemEval '10}
}
```

## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.