5roop committed
Commit 7859bb9
1 Parent(s): c9091db

Add citations and link to Arxiv

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -46,4 +46,8 @@ The model is a result of the [ParlaMint project](https://www.clarin.eu/parlamint
 
 The first application of this model is the [XLM-R-parlasent model](https://huggingface.co/classla/xlm-r-parlasent), fine-tuned on the [ParlaSent dataset](http://hdl.handle.net/11356/1868) for the task of sentiment analysis in parliamentary proceedings.
 
-Michal Mochtak, Peter Rupnik, Nikola Ljubešić: The ParlaSent Multilingual Training Dataset for Sentiment Identification in Parliamentary Proceedings.
+Find more detail about this model in our [paper](https://arxiv.org/abs/2309.09783):
+
+```latex
+@article{Mochtak_Rupnik_Ljubešić_2023, title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings}, rights={All rights reserved}, url={http://arxiv.org/abs/2309.09783}, abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.}, note={arXiv:2309.09783 [cs]}, number={arXiv:2309.09783}, publisher={arXiv}, author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola}, year={2023}, month={Sep}, language={en} }
+```
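
For reference, the XLM-R-parlasent checkpoint linked in the README above is published on the Hugging Face Hub, so a minimal usage sketch with the transformers library might look like the following. The example sentence is invented, and whether the fine-tuned head yields class labels or a single regression-style sentiment score should be checked against the model card:

```python
# Minimal sketch (not part of the commit): loading the XLM-R-parlasent
# checkpoint named in the README with the Hugging Face transformers library.
# The example sentence is invented; how to interpret the logits (class labels
# vs. a regression-style sentiment score) depends on the fine-tuned head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "classla/xlm-r-parlasent"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentence = "The honourable member's proposal is a welcome step forward."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # raw output of the sentiment head
```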