Dataset card metadata:
- Languages: English
- Multilinguality: monolingual
- Size Categories: n<1K
- Language Creators: unknown
- Annotations Creators: unknown
- Tags:
- License:
boudinfl committed on
Commit 49c1926
1 Parent(s): 36dd77a

Adding title/abstract + PRMU categories

Files changed (4):
  1. README.md +11 -2
  2. prmu.py +100 -0
  3. test.jsonl +2 -2
  4. train.jsonl +2 -2
README.md CHANGED
@@ -42,6 +42,9 @@ This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016]
  * `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique.
  We keep the title and abstract and select the most content bearing sentences from the remaining contents.
 
+ Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided.
+ Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014].
+
  Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition).
  They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
  Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
@@ -50,13 +53,15 @@ Details about the process can be found in `prmu.py`.
 
  ## Content and statistics
 
- The dataset is divided into the following three splits:
+ The dataset is divided into the following two splits:
 
  | Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
  | :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:|
  | Train | 144 | - | - | - | - | - | - |
  | Test | 100 | - | - | - | - | - | - |
 
+ Statistics (# words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.
+
  The following data fields are available:
 
  - **id**: unique identifier of the document.
@@ -71,9 +76,12 @@ The following data fields are available:
 
  ## References
 
- - (Kim et al., 2010). Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
+ - (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
  [SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010].
  In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics.
+ - (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014.
+ [Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014].
+ In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA).
  - (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016.
  [How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016].
  In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee.
@@ -82,5 +90,6 @@ The following data fields are available:
  In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
 
  [kim-2010]: https://aclanthology.org/S10-1004/
+ [chaimongkol-2014]: https://aclanthology.org/L14-1259/
  [boudin-2016]: https://aclanthology.org/W16-3917/
  [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
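The data fields listed in the README map directly onto the JSON Lines splits (`train.jsonl`, `test.jsonl`): one JSON document per line. A minimal reading sketch, assuming the field names used in `prmu.py` (`id`, `title`, `abstract`, `keyphrases`, `prmu`); the record below is a made-up illustration, not an actual document from the dataset:

```python
import json

def read_jsonl(path):
    """Yield one dict per line of a JSON Lines split (train.jsonl / test.jsonl)."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line.strip())

# Made-up record illustrating the field layout: keyphrases are stemmed,
# and prmu holds one PRMU category per keyphrase.
record = json.loads(
    '{"id": "X-00", "title": "A toy title", "abstract": "A toy abstract.",'
    ' "keyphrases": ["toy titl"], "prmu": ["P"]}'
)
print(record["id"], record["keyphrases"], record["prmu"])
```

Note that `keyphrases` and `prmu` are parallel lists, which is what the consistency check at the end of `prmu.py` relies on.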
prmu.py ADDED
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+
+import sys
+import json
+import spacy
+
+from nltk.stem.snowball import SnowballStemmer as Stemmer
+
+nlp = spacy.load("en_core_web_sm")
+
+# https://spacy.io/usage/linguistic-features#native-tokenizer-additions
+
+from spacy.lang.char_classes import ALPHA, ALPHA_LOWER, ALPHA_UPPER
+from spacy.lang.char_classes import CONCAT_QUOTES, LIST_ELLIPSES, LIST_ICONS
+from spacy.util import compile_infix_regex
+
+# Modify tokenizer infix patterns
+infixes = (
+    LIST_ELLIPSES
+    + LIST_ICONS
+    + [
+        r"(?<=[0-9])[+\-\*^](?=[0-9-])",
+        r"(?<=[{al}{q}])\.(?=[{au}{q}])".format(
+            al=ALPHA_LOWER, au=ALPHA_UPPER, q=CONCAT_QUOTES
+        ),
+        r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
+        # ✅ Commented out regex that splits on hyphens between letters:
+        # r"(?<=[{a}])(?:{h})(?=[{a}])".format(a=ALPHA, h=HYPHENS),
+        r"(?<=[{a}0-9])[:<>=/](?=[{a}])".format(a=ALPHA),
+    ]
+)
+
+infix_re = compile_infix_regex(infixes)
+nlp.tokenizer.infix_finditer = infix_re.finditer
+
+
+def contains(subseq, inseq):
+    return any(inseq[pos:pos + len(subseq)] == subseq for pos in range(0, len(inseq) - len(subseq) + 1))
+
+
+def find_pmru(tok_title, tok_text, tok_kp):
+    """Find PRMU category of a given keyphrase."""
+
+    # if kp is present
+    if contains(tok_kp, tok_title) or contains(tok_kp, tok_text):
+        return "P"
+
+    # if kp is considered as absent
+    else:
+
+        # find present and absent words
+        present_words = [w for w in tok_kp if w in tok_title or w in tok_text]
+
+        # if "all" words are present
+        if len(present_words) == len(tok_kp):
+            return "R"
+        # if "some" words are present
+        elif len(present_words) > 0:
+            return "M"
+        # if "no" words are present
+        else:
+            return "U"
+
+
+if __name__ == '__main__':
+
+    data = []
+
+    # read the dataset
+    with open(sys.argv[1], 'r') as f:
+        # loop through the documents
+        for line in f:
+            doc = json.loads(line.strip())
+
+            print(doc['id'])
+
+            title_spacy = nlp(doc['title'])
+            abstract_spacy = nlp(doc['abstract'])
+
+            title_tokens = [token.text for token in title_spacy]
+            abstract_tokens = [token.text for token in abstract_spacy]
+
+            title_stems = [Stemmer('porter').stem(w.lower()) for w in title_tokens]
+            abstract_stems = [Stemmer('porter').stem(w.lower()) for w in abstract_tokens]
+
+            keyphrases_stems = []
+            for keyphrase in doc['keyphrases']:
+                keyphrases_stems.append(keyphrase.split())
+
+            prmu = [find_pmru(title_stems, abstract_stems, kp) for kp in keyphrases_stems]
+
+            if doc['prmu'] != prmu:
+                print("PRMU categories are not identical!")
+
+            doc['prmu'] = prmu
+            data.append(json.dumps(doc))
+
+    # write the json
+    with open(sys.argv[2], 'w') as o:
+        o.write("\n".join(data))
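The categorization logic in `prmu.py` can be exercised on toy, already-stemmed token lists. `contains` and `find_pmru` are copied from the script above; the spacy/nltk preprocessing is omitted so the sketch runs standalone, and the toy stems are my own examples:

```python
def contains(subseq, inseq):
    # True if subseq occurs as a contiguous run inside inseq.
    return any(inseq[pos:pos + len(subseq)] == subseq
               for pos in range(0, len(inseq) - len(subseq) + 1))

def find_pmru(tok_title, tok_text, tok_kp):
    """Find PRMU category of a given keyphrase (as in prmu.py)."""
    if contains(tok_kp, tok_title) or contains(tok_kp, tok_text):
        return "P"  # Present: keyphrase appears contiguously
    present_words = [w for w in tok_kp if w in tok_title or w in tok_text]
    if len(present_words) == len(tok_kp):
        return "R"  # Reordered: all words present, never contiguous
    elif len(present_words) > 0:
        return "M"  # Mixed: only some words present
    return "U"      # Unseen: no words present

# Toy, already-stemmed tokens (made up for illustration):
title = ["keyphras", "extract"]
text = ["we", "extract", "keyphras", "from", "scientif", "document"]

print(find_pmru(title, text, ["keyphras", "extract"]))  # P (contiguous in title)
print(find_pmru(title, text, ["extract", "document"]))  # R (both words, never adjacent)
print(find_pmru(title, text, ["neural", "extract"]))    # M (one word present)
print(find_pmru(title, text, ["topic", "model"]))       # U (no words present)
```

This also shows why the README stresses stemming: `find_pmru` compares stems exactly, so an unstemmed keyphrase against stemmed text would wrongly fall into M or U.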
test.jsonl CHANGED
@@ -1,3 +1,3 @@
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5fda3666bcdbd4d6dd8b586864487132933734e0ed6e6aff6c758928b41ac46d
- size 11110282
+ oid sha256:b2d985cd322c7ed8019d9a88ff3cbd92652bac155cb94fe20883e9ef235150ac
+ size 11239360
train.jsonl CHANGED
@@ -1,3 +1,3 @@
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ff5128cfe97a99cbc52b70d905cf731930f33072e6ebc4f7740b56e80818c8e9
- size 15885424
+ oid sha256:45f7e772b653b3e7764857665129af800d190cf2e8801ee12fc60731b40404e6
+ size 16057856
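The `test.jsonl` and `train.jsonl` entries above are git-lfs pointer files: they store only the sha256 (`oid`) and byte size of the real data, which is why the diff changes just those two fields. A downloaded split can be checked against its pointer with a short sketch (the helper name is mine, not part of the repo; the demo uses a throwaway file rather than the real split):

```python
import hashlib
import os
import tempfile

def matches_lfs_pointer(path, expected_oid, expected_size):
    """Return True if a file's sha256 and byte size match its git-lfs pointer fields."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid and os.path.getsize(path) == expected_size

# Demo on a throwaway file instead of the real split:
payload = b'{"id": "X-00"}\n'
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    demo_path = tmp.name
demo_oid = hashlib.sha256(payload).hexdigest()
print(matches_lfs_pointer(demo_path, demo_oid, len(payload)))  # True
```

The same call with the `oid`/`size` values from the pointers above verifies a downloaded `train.jsonl` or `test.jsonl`.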