reordering keyphrases by order of appearance in text
Browse files
- README.md +3 -2
- test.jsonl +2 -2
- train.jsonl +2 -2
README.md
CHANGED
@@ -35,7 +35,7 @@ This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016]
 * `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library.
 We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505.
 We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements : title, headers, abstract, introduction, related work, body text and conclusion.
-We finally apply a systematic dehyphenation at line breaks.
+We finally apply a systematic dehyphenation at line breaks.

 * `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.

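The "systematic dehyphenation at line breaks" mentioned in the hunk above can be sketched in a few lines. The function name and regex below are illustrative, not taken from the dataset's actual pipeline:

```python
import re

def dehyphenate(text: str) -> str:
    """Join words hyphenated across a line break, e.g. 'dehyphen-\\nation'."""
    # Merge '<letters>-<newline><letters>' into a single word; other
    # line breaks and ordinary in-line hyphens are left untouched.
    return re.sub(r"(\w+)-\s*\n\s*(\w+)", r"\1\2", text)
```

Note that a purely systematic rule like this can over-join genuine compounds that happen to break at a hyphen; it is a sketch of the idea, not a faithful reproduction of the preprocessing code.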
@@ -50,6 +50,7 @@ They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</
 Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
 Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
 Details about the process can be found in `prmu.py`.
+The <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1).

 ## Content and statistics

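The hyphen-preserving tokenization rule described in the hunk above corresponds to a known spaCy customization: rebuild the infix patterns without the rule that splits on hyphens between letters. This sketch uses a blank English pipeline instead of `en_core_web_sm` to stay self-contained, and the string filter on the default infixes is an assumption about their exact form, not code from the dataset:

```python
import spacy
from spacy.util import compile_infix_regex

# Blank English pipeline: same default tokenizer rules, no model download.
nlp = spacy.blank("en")

# Drop the default infix pattern that splits on hyphens between letters
# (it contains the hyphen alternation "-|–|—"), so "graph-based" stays
# a single token instead of ["graph", "-", "based"].
infixes = [p for p in nlp.Defaults.infixes if "-|–|—" not in p]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
```

With this change, `[t.text for t in nlp("a graph-based model")]` yields `graph-based` as one token.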
@@ -92,4 +93,4 @@ The following data fields are available :
 [kim-2010]: https://aclanthology.org/S10-1004/
 [chaimongkol-2014]: https://aclanthology.org/L14-1259/
 [boudin-2016]: https://aclanthology.org/W16-3917/
-[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
+[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
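The stemming-based matching of reference keyphrases against the source text, described in the README changes above, can be sketched as follows. `is_present` is a hypothetical helper for illustration; the actual implementation lives in `prmu.py`:

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem(tokens):
    """Lowercase and Porter-stem a list of tokens."""
    return [stemmer.stem(t.lower()) for t in tokens]

def is_present(keyphrase_tokens, text_tokens):
    """True if the stemmed keyphrase occurs contiguously in the stemmed text."""
    kp, txt = stem(keyphrase_tokens), stem(text_tokens)
    n = len(kp)
    return any(txt[i:i + n] == kp for i in range(len(txt) - n + 1))
```

Matching on stems rather than surface forms is what lets, e.g., "ranking models" count as Present in a text that only contains "ranking model".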
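The line this commit adds to the README, ordering Present reference keyphrases by their first occurrence in the concatenation of title and text, could be implemented along these lines. Function names are illustrative and matching is done on Porter stems, consistent with the README's description; the authoritative logic is in `prmu.py`:

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def first_occurrence(kp_tokens, text_tokens):
    """Index of the first stemmed match of the keyphrase in the text."""
    kp = [stemmer.stem(t.lower()) for t in kp_tokens]
    txt = [stemmer.stem(t.lower()) for t in text_tokens]
    n = len(kp)
    for i in range(len(txt) - n + 1):
        if txt[i:i + n] == kp:
            return i
    return len(txt)  # phrases with no match sort last

def order_by_appearance(keyphrases, title_tokens, text_tokens):
    """Sort keyphrases by first match position in title + text (lvl-1 order)."""
    doc = title_tokens + text_tokens
    return sorted(keyphrases, key=lambda kp: first_occurrence(kp.split(), doc))
```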
test.jsonl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:615ebea82047b901dbb0f6bda32278c41ed99f7a8d824ef8e371e234229fc52c
+size 11238079
train.jsonl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c72af40191d12c41e410200eadf28756f36a791c4345771dd2d9362b8241c4c4
+size 16055534
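The `test.jsonl` and `train.jsonl` diffs above update Git LFS pointer files, whose format per the LFS specification is a short list of `key value` lines (`version`, `oid`, `size`). A minimal parser, assuming a well-formed pointer (the pointer text below reuses the `test.jsonl` values from this commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:615ebea82047b901dbb0f6bda32278c41ed99f7a8d824ef8e371e234229fc52c
size 11238079
"""
```

The pointer stores only the SHA-256 of the real file and its size in bytes; the actual JSONL content is fetched from LFS storage at checkout.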