danielpreotiuc committed
Commit 8d34eeb
Parent(s): b07d250

Update README.md

Files changed (1): README.md +2 -0

README.md CHANGED
@@ -106,6 +106,7 @@ Output: language model;keyphrase generation;new pre-training objective;pre-train
 
 Please cite this work using the following BibTeX entry:
 
+```
 @inproceedings{kulkarni-etal-2022-learning,
     title = "Learning Rich Representation of Keyphrases from Text",
     author = "Kulkarni, Mayank and
@@ -122,5 +123,6 @@ Please cite this work using the following BibTeX entry:
     pages = "891--906",
     abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.",
 }
+```
 
 Please direct all questions to dpreotiucpie@bloomberg.net