laura.vasquezrodriguez committed
Commit f7fa1d6
1 parent: a1bc52e

Update paper link in README.md

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -3,7 +3,7 @@ license: cc-by-4.0
 ---
 
 
-## Prompt-based learning for Lexical Simplification: prompt-ls-pt-1
+## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-pt-1
 
 We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification.
 This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
@@ -31,7 +31,7 @@ For the zero-shot setting, we used the original models with no further training.
 ## Results
 
 We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage the users to also check our results in the development set, which show an increased performance for Spanish and Portuguese.
-You can find more details in our [paper](https://drive.google.com/file/d/10nOMKuM62khIfRea8-XHdG6jsyMXsZtP/view?usp=share_link).
+You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link).
 
 | Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 |
 |------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------|
@@ -49,11 +49,11 @@ You can find more details in our [paper](https://drive.google.com/file/d/10nOMKu
 ## Citation
 
 If you use our results and scripts in your research, please cite our work:
-"[UoM&MMU at TSAR-2022 Shared Task — PromptLS: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/10nOMKuM62khIfRea8-XHdG6jsyMXsZtP/view?usp=share_link)".
+"[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)".
 
 ```
 @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
-    title = "UoM\&MMU at TSAR-2022 Shared Task — PromptLS: Prompt Learning for Lexical Simplification",
+    title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification",
     author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
       Nguyen, Nhung T. H. and
       Shardlow, Matthew and
@@ -62,4 +62,4 @@ If you use our results and scripts in your research, please cite our work:
     month = dec,
     year = "2022",
 }
-```
+```
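
The README in this diff describes PromptLS as fine-tuning masked language models for lexical simplification, which implies a fill-mask workflow: wrap the flagged complex word in a prompt and let the model rank substitutes for the mask. A minimal sketch of that idea; the prompt wording and the Hub repo id in the comment are illustrative assumptions, not the tuned Prompt1/Prompt2 templates from the paper:

```python
# Sketch of the prompt-based setup the README describes: lexical
# simplification cast as fill-mask prediction over a prompt that names
# the complex word. The template below is a hypothetical illustration.

def build_prompt(sentence: str, complex_word: str, mask_token: str = "[MASK]") -> str:
    """Return a masked prompt asking the LM for a simpler substitute."""
    return f"{sentence} A simpler word for '{complex_word}' is {mask_token}."

# With a fine-tuned checkpoint from the Hub (repo id assumed, not
# confirmed by this diff), candidates could then be ranked like so:
#
#   from transformers import pipeline
#   fill = pipeline("fill-mask", model="<account>/prompt-ls-pt-1")
#   prompt = build_prompt("The edifice collapsed.", "edifice",
#                         fill.tokenizer.mask_token)
#   candidates = [c["token_str"].strip() for c in fill(prompt, top_k=3)]
```

Keeping the prompt construction separate from the pipeline call makes it easy to swap in the per-language templates the README's results table refers to.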