alemiaschi committed
Commit 6d6eb3f
1 Parent(s): f39ef1d

Update README.md

Files changed (1)
  1. README.md +20 -8
README.md CHANGED
@@ -14,17 +14,29 @@ widget:
 </p>
 
 
-This model is released as part of the paper ["Linguistic Knowledge Can Enhance Encoder-Decoder Models (*If You Let It*)"](https://arxiv.org/pdf/2402.17608.pdf) (Miaschi et al., 2024).
+This model is released as part of the paper ["Linguistic Knowledge Can Enhance Encoder-Decoder Models (*If You Let It*)"](https://aclanthology.org/2024.lrec-main.922.pdf) (Miaschi et al., 2024).
 If you use this model in your work, we kindly ask you to cite our paper:
 
 ```bibtex
-@article{miaschi2024linguistic,
-      title={Linguistic Knowledge Can Enhance Encoder-Decoder Models (If You Let It)},
-      author={Alessio Miaschi and Felice Dell'Orletta and Giulia Venturi},
-      year={2024},
-      eprint={2402.17608},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+@inproceedings{miaschi-etal-2024-linguistic-knowledge,
+    title = "Linguistic Knowledge Can Enhance Encoder-Decoder Models (If You Let It)",
+    author = "Miaschi, Alessio  and
+      Dell{'}Orletta, Felice  and
+      Venturi, Giulia",
+    editor = "Calzolari, Nicoletta  and
+      Kan, Min-Yen  and
+      Hoste, Veronique  and
+      Lenci, Alessandro  and
+      Sakti, Sakriani  and
+      Xue, Nianwen",
+    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
+    month = may,
+    year = "2024",
+    address = "Torino, Italy",
+    publisher = "ELRA and ICCL",
+    url = "https://aclanthology.org/2024.lrec-main.922",
+    pages = "10539--10554",
+    abstract = "In this paper, we explore the impact of augmenting pre-trained Encoder-Decoder models, specifically T5, with linguistic knowledge for the prediction of a target task. In particular, we investigate whether fine-tuning a T5 model on an intermediate task that predicts structural linguistic properties of sentences modifies its performance in the target task of predicting sentence-level complexity. Our study encompasses diverse experiments conducted on Italian and English datasets, employing both monolingual and multilingual T5 models at various sizes. Results obtained for both languages and in cross-lingual configurations show that linguistically motivated intermediate fine-tuning has generally a positive impact on target task performance, especially when applied to smaller models and in scenarios with limited data availability.",
 }
 ```
 