|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:13:47.592755Z" |
|
}, |
|
"title": "Multilingual and Multilabel Emotion Recognition using Virtual Adversarial Training", |
|
"authors": [ |
|
{ |
|
"first": "Vikram", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "vikramgupta@sharechat.co" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Virtual Adversarial Training (VAT) has been effective in learning robust models under supervised and semi-supervised settings for both computer vision and NLP tasks. However, the efficacy of VAT for multilingual and multilabel text classification has not been explored before. In this work, we explore VAT for multilabel emotion recognition with a focus on leveraging unlabelled data from different languages to improve the model performance. We perform extensive semi-supervised experiments on Se-mEval2018 multilabel and multilingual emotion recognition dataset and show performance gains of 6.2% (Arabic), 3.8% (Spanish) and 1.8% (English) over supervised learning with same amount of labelled data (10% of training data). We also improve the existing state-ofthe-art by 7%, 4.5% and 1% (Jaccard Index) for Spanish, Arabic and English respectively and perform probing experiments for understanding the impact of different layers of the contextual models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Virtual Adversarial Training (VAT) has been effective in learning robust models under supervised and semi-supervised settings for both computer vision and NLP tasks. However, the efficacy of VAT for multilingual and multilabel text classification has not been explored before. In this work, we explore VAT for multilabel emotion recognition with a focus on leveraging unlabelled data from different languages to improve the model performance. We perform extensive semi-supervised experiments on Se-mEval2018 multilabel and multilingual emotion recognition dataset and show performance gains of 6.2% (Arabic), 3.8% (Spanish) and 1.8% (English) over supervised learning with same amount of labelled data (10% of training data). We also improve the existing state-ofthe-art by 7%, 4.5% and 1% (Jaccard Index) for Spanish, Arabic and English respectively and perform probing experiments for understanding the impact of different layers of the contextual models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Emotion recognition is an active and crucial area of research, especially for social media platforms. Understanding the emotional state of the users from textual data forms an important problem as it helps in discovering signs of fear, anxiety, bullying, hatred etc. and maintaining the emotional health of the people and platform. With the advent of deep neural networks and contextual models, text understanding has advanced dramatically by leveraging huge amount of unlabelled data freely available on web. However, even with these advancements, annotating emotion categories is expensive and time consuming as emotion categories are highly correlated and subjective in nature and can co-occur in the same text. Psychological studies suggest that emotions like \"anger\" and \"sadness\" are corelated and co-occur more frequently than \"anger\" and \"happiness\" (Plutchik, 1980) . In a multilingual setup, the annotation becomes even more challenging as annotator team are expected to be familiar with different languages and culture for understanding the emotions accurately. Imbalance in availability of the data across languages further creates problems, especially in case of resource impoverished languages. In this work, we investigate the following key points; a) Can unlabelled data from other languages improve recognition performance of target language and help in reducing requirement of labelled data? b) Efficacy of VAT for multilingual and multilabel setup.", |
|
"cite_spans": [ |
|
{ |
|
"start": 858, |
|
"end": 874, |
|
"text": "(Plutchik, 1980)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To address the aforementioned questions, we focus our experiments towards semi-supervised learning in a multilingual and multilabel emotion classification framework. We formulate semi-supervised Virtual Adversarial Training (VAT) (Miyato et al., 2018) for multilabel emotion classification using contextual models and perform extensive experiments to demonstrate that unlabelled data from other languages L ul = {L 1 , L 2 , . . . , L n } improves the classification on the target language L tgt . We obtain competitive performance by reducing the amount of annotated data demonstrating crosslanguage learning. To effectively leverage the multilingual content, we use multilingual contextual models for representing the text across languages. We also evaluate monolingual contextual models to understand the performance differences between multilingual and monolingual models and explore the advantages of domain-adaptive and task-adaptive pretraining of models for our task and observe substantial gains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 251, |
|
"text": "(Miyato et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We perform extensive experiments on the SemEval2018 (Affect in Tweets: Task E-c 1 ) dataset (Mohammad et al., 2018) which contains tweets from Twitter annotated with 11 emotion categories across three languages -English, Spanish and Arabic and demonstrate the effectiveness of semi-supervised learning across languages. To the best of our knowledge, our study is the first one to explore semi-supervised adversarial learning across different languages for multilabel classification. In summary, the main contributions of our work are the following:", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 115, |
|
"text": "(Mohammad et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We explore Virtual Adversarial Training (VAT) for semi-supervised multilabel classification on multilingual corpus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Experiments demonstrating 6.2%, 3.8% and 1.8% improvements (Jaccard Index) on Arabic, Spanish and English by leveraging unlabelled data of other languages while using 10% of labelled samples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Improve state-of-the-art of multilabel emotion recognition by 7%, 4.5% and 1% (Jaccard Index) for Spanish, Arabic and English respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Experiments showcasing the advantages of domain-adaptive and task-adaptive pretraining", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Semi-supervised learning is an important paradigm for tackling the scarcity of labelled data as it marries the advantages of supervised and unsupervised learning by leveraging the information hidden in large amount of unlabelled data along with small amount of labelled data (Yang et al., 2021) , (Van Engelen and Hoos, 2020) . Early approaches used self-training for leveraging the model's own predictions on unlabelled data to obtain additional information during training (Yarowsky, 1995 ) (McClosky et al., 2006 . Clark et al. (2018) proposed cross-view training (CVT) for various tasks like chunking, dependency parsing, machine translation and reported state-of-theart results. CVT forces the model to make consistent predictions when using the full input or partial input. Ladder networks (Laine and Aila, 2016) , Mean Teacher networks (Tarvainen and Valpola, 2017) are another way for semi-supervised learning where temporal and model-weights are ensembled. Another popular direction towards semisupervised learning is adversarial training where the data point is perturbed with random or carefully tuned perturbations to create an adversarial sample. The model is then encouraged to maintain consistent predictions for the original sample and the adversarial sample. Adversarial training was initially explored for developing secure and robust models (Goodfellow et al., 2014) , (Xiao et al., 2018) , (Saadatpanah et al., 2020) to prevent attacks. Miyato et al. (2016) , , Zhu et al. (2019) showed that adversarial training can improve both robustness and generalization for classification tasks, machine translation and GLUE benchmark respectively. Miyato et al. (2016) , Sachan et al. 2019, Miyato et al. (2018) then applied the adversarial training for semi-supervised image and text classification showing substantial improvements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 294, |
|
"text": "(Yang et al., 2021)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 325, |
|
"text": "(Van Engelen and Hoos, 2020)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 490, |
|
"text": "(Yarowsky, 1995", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 515, |
|
"text": ") (McClosky et al., 2006", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 537, |
|
"text": "Clark et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 818, |
|
"text": "(Laine and Aila, 2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 843, |
|
"end": 872, |
|
"text": "(Tarvainen and Valpola, 2017)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 1360, |
|
"end": 1385, |
|
"text": "(Goodfellow et al., 2014)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1388, |
|
"end": 1407, |
|
"text": "(Xiao et al., 2018)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 1410, |
|
"end": 1436, |
|
"text": "(Saadatpanah et al., 2020)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 1457, |
|
"end": 1477, |
|
"text": "Miyato et al. (2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1482, |
|
"end": 1499, |
|
"text": "Zhu et al. (2019)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 1659, |
|
"end": 1679, |
|
"text": "Miyato et al. (2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1702, |
|
"end": 1722, |
|
"text": "Miyato et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Emotion recognition is an important problem and has received lot of attention from the community (Yadollahi et al., 2017) , (Sailunaz et al., 2018) . The taxonomies of emotions suggested by Plutchik wheel of emotions (Plutchik, 1980) and (Ekman, 1984) have been used by the majority of the previous work in emotion recognition. Emotion recognition can be formulated as a multiclass problem (Scherer and Wallbott, 1994), (Mohammad, 2012) or a multilabel problem (Mohammad et al., 2018) , (Demszky et al., 2020) . In the multiclass formulation, the objective is to identify the presence of one of the emotion from the taxonomy whereas in a multilabel setting, more than one emotion can be present in the text instance. Binary relevance approach (Godbole and Sarawagi, 2004) is another way to break multilabel problem into multiple binary classification problems. However, this approach does not model the co-relation between emotions. Seq2Seq approaches (Yang et al., 2018) , (Huang et al., 2021) solve this problem by modelling the relationship between emotions by inferring emotion in an incremental manner. An interesting direction for handling data scarcity in emotion recognition is to use distant supervision by exploiting emojis (Felbo et al., 2017) , hashtags (Mohammad, 2012) or pretraining emotion specific embeddings and language models (Barbieri et al., 2021), (Ghosh et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 121, |
|
"text": "(Yadollahi et al., 2017)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 147, |
|
"text": "(Sailunaz et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 233, |
|
"text": "(Plutchik, 1980)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 251, |
|
"text": "(Ekman, 1984)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 484, |
|
"text": "(Mohammad et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 509, |
|
"text": "(Demszky et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 743, |
|
"end": 771, |
|
"text": "(Godbole and Sarawagi, 2004)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 952, |
|
"end": 971, |
|
"text": "(Yang et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 974, |
|
"end": 994, |
|
"text": "(Huang et al., 2021)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1234, |
|
"end": 1254, |
|
"text": "(Felbo et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1266, |
|
"end": 1282, |
|
"text": "(Mohammad, 2012)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1371, |
|
"end": 1391, |
|
"text": "(Ghosh et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "With the emergence of contextual models like BERT (Devlin et al., 2018) , Roberta etc., the field of NLP and text classification has been revolutionized as these models are able to learn efficient representations from a huge corpus of unlabelled data across different languages and domains (Hassan et al., 2021) , (Barbieri et al., 2021) . Social media content contains linguistic errors, idiosyncratic styles, spelling mistakes, grammatical inconsistency, slangs, hashtags, emoticons etc. (Barbieri et al., 2018) , (Derczynski et al., 2013) due to which off-the-shelf contextual models may not be optimum. We use languageadaptive, domain-adaptive and task-adaptive pretraining which has shown performance gains (Peters et al., 2019) , (Gururangan et al., 2020) , (Barbieri et al., 2021) , (Howard and Ruder, 2018) , (Lee et al., 2020) for different tasks and domains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 311, |
|
"text": "(Hassan et al., 2021)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 337, |
|
"text": "(Barbieri et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 513, |
|
"text": "(Barbieri et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 541, |
|
"text": "(Derczynski et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 733, |
|
"text": "(Peters et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 761, |
|
"text": "(Gururangan et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 787, |
|
"text": "(Barbieri et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 814, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 817, |
|
"end": 835, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "We consider the task of multilabel emotion classification, where given a text t \u2208 T with t = {w_1, w_2, ..., w_l}, we predict the presence of y emotion categories denoted by {1, 2, ..., y}. T represents the corpus of all the sentences across the different languages and w_i represent the tokens in the sentence. We leverage contextual models as feature extractors \u03c6 : t_i \u2192 x_i, where x_i \u2208 R^d and d is the dimension of the text representations, and train a classifier over these representations.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3"

},
|
{ |
|
"text": "Virtual Adversarial Training (VAT) (Miyato et al., 2018 ) is a regularization method for learning robust representations by encouraging the models to produce similar outputs for the input data points and local perturbations. VAT creates the adversary by perturbing the input in the direction which maximizes the change in the output of the model. Since VAT does not require labels it is well suited for semi-supervised applications. Consider x \u2208 R d as the d dimensional representation of the text and y as the ground truth. Objective function of VAT (L vadv ) is represented as,", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 55, |
|
"text": "(Miyato et al., 2018", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "L vadv (x, \u03b8) := D[p(y|x,\u03b8), p(y|x + r vadv , \u03b8)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(1) where,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "r vadv := arg maxD[p(y|x,\u03b8), p(y|x+r, \u03b8)] (2) and ||r|| 2 < and r vadv \u2208 R d . D[p, p']", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "measures the divergence between the two probability distributions and r vadv is the virtual adversarial perturbation that maximizes this divergence. In order to leverage the unlabelled data, the predictions from the current estimate of the model\u03b8 are used as the target. However, it is not possible to exactly compute r vadv by a closed form solution or linear approximation as gradient g (Equation 4) with respect to r is always zero at r = 0. Miyato et al. (2018) propose fast approximation method to formulate r adv as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 465, |
|
"text": "Miyato et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r vadv \u2248 g ||g|| 2 ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g = rD[p(y|x,\u03b8), p(y|x + r,\u03b8)]", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and r = * q, where q is a randomly sampled unit vector. With this approximation, we can use backpropagation to compute the gradients g in Equation 4. The overall training objective, L V AT becomes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "L V AT = L ce + \u03b1 * L vadv (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where L ce is the multiclass classification loss and L adv is the adversarial loss. \u03b1 is the balancing hyperparameter between the two losses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Virtual Adversarial Training (VAT)", |
|
"sec_num": "3.1" |
|
}, |
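
{

"text": "To make the fast approximation concrete, the following is a minimal PyTorch sketch of Equations 1-4 for the multiclass case; the names model, embeddings, xi and eps are illustrative assumptions (a model mapping continuous representations to logits), not the authors' released code:\nimport torch\nimport torch.nn.functional as F\n\ndef vat_perturbation(model, embeddings, xi=1e-6, eps=0.5):\n    # p(y|x, \u03b8\u0302): predictions of the current model, treated as fixed targets\n    with torch.no_grad():\n        target = F.softmax(model(embeddings), dim=-1)\n    # r = \u03be * q for a randomly sampled unit vector q\n    q = torch.randn_like(embeddings)\n    r = (xi * q / q.norm(dim=-1, keepdim=True)).requires_grad_()\n    pred = F.log_softmax(model(embeddings + r), dim=-1)\n    div = F.kl_div(pred, target, reduction=\"batchmean\")\n    # g = \u2207_r D[...] via a single backward pass (Equation 4)\n    g, = torch.autograd.grad(div, r)\n    # Equation 3: r_vadv \u2248 \u03b5 * g / ||g||_2\n    return eps * g / (g.norm(dim=-1, keepdim=True) + 1e-12)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Virtual Adversarial Training (VAT)",

"sec_num": "3.1"

},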
|
{ |
|
"text": "We explore VAT for multilingual contextual models and multilabel classification. For computer vision tasks, perturbing the raw pixel values to generate adversarial examples is intuitive as the input space is continuous. However, contextual models use the indices of the words as input which are not present in the continuous domain and thus perturbing them is not optimal. Perturbing an index k of a word w k to k + r vadv would not result in a word closer to w k . To overcome this problem, instead of perturbing the input, we perturb the intermediate layer of the contextual models which form a continuous representation space and allows us to use VAT with contextual models. Similar strategy for text classification was also explored by Miyato et al. (2016) . For modelling multilabel classification, we measure the divergence of multilabel outputs by Mean Square Error (MSE),", |
|
"cite_spans": [ |
|
{ |
|
"start": 740, |
|
"end": 760, |
|
"text": "Miyato et al. (2016)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilabel Virtual Adversarial Training (mlVAT)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L vadv (x, \u03b8) := M SE[p(y|x,\u03b8), p(y|x+r vadv , \u03b8)]", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Multilabel Virtual Adversarial Training (mlVAT)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "MSE is calculated over the logits normalized by sigmoid. This is important as the outputs in case of multilabel classification are not probability distributions across classes which renders the usage of KL-Divergence incompatible for this scenario. We also experiment by treating the probability for each emotion separately but our results demonstrate the effectiveness of Mean Square Error (MSE) for our task (Table 4 ). The overall training objective, L mlVAT is:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 410, |
|
"end": 418, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multilabel Virtual Adversarial Training (mlVAT)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "L mlVAT = L bce + \u03b1 * L vadv (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilabel Virtual Adversarial Training (mlVAT)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where, L bce is the multilabel binary cross entropy loss. We represent the text instances using monolingual/multilingual contextual representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilabel Virtual Adversarial Training (mlVAT)", |
|
"sec_num": "3.2" |
|
}, |
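
{

"text": "As a sketch of Equations 6-7, the multilabel variant replaces the softmax/KL pair with sigmoid-normalized logits and MSE; again, model, embeddings, xi and eps are illustrative assumptions rather than the authors' released code:\nimport torch\nimport torch.nn.functional as F\n\ndef mlvat_vadv_loss(model, embeddings, xi=1e-6, eps=0.5):\n    # Fixed targets from the current model; one sigmoid per emotion (multilabel)\n    with torch.no_grad():\n        p_clean = torch.sigmoid(model(embeddings))\n    # Random unit direction scaled by xi, then one gradient step to get r_vadv\n    q = torch.randn_like(embeddings)\n    r = (xi * q / q.norm(dim=-1, keepdim=True)).requires_grad_()\n    mse = F.mse_loss(torch.sigmoid(model(embeddings + r)), p_clean)\n    g, = torch.autograd.grad(mse, r)\n    r_vadv = eps * g / (g.norm(dim=-1, keepdim=True) + 1e-12)\n    # Equation 6: divergence at the adversarial point\n    return F.mse_loss(torch.sigmoid(model(embeddings + r_vadv)), p_clean)\nThe full mlVAT objective of Equation 7 is then L_bce plus \u03b1 times this term.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilabel Virtual Adversarial Training (mlVAT)",

"sec_num": "3.2"

},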
|
{ |
|
"text": "mlVAT: For each target language L tgt , we randomly select a percentage of samples from the training set of this language and use them as labelled examples for training. We use the remaining data of the same language and the complete dataset of the other languages L ul as the unlabelled set. Each training batch is created by maintaining a ratio between labelled and unlabelled examples for stable training. For the labelled set, both multilabel classification loss L bce and adversarial loss L vadv is applied. For the unlabelled examples, only the adversarial loss L vadv is used. Sup: We also train supervised classifiers (Sup) by using the same amount of labelled data for target language L tgt . Supervised classifiers (Sup) act as baseline and help in measuring the gains obtained by semi-supervised learning. We vary the ratio of sampled labelled examples as 10%, 25%, 50% and 100% to study the progression of our framework across different amount of labelled data of the target language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Semi-Supervised Setup", |
|
"sec_num": "3.3" |
|
}, |
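
{

"text": "One training step under this setup can be sketched as follows, reusing the hypothetical mlvat_vadv_loss above; x_lab, y_lab and x_unlab stand for the labelled representations, their multilabel targets and the unlabelled representations in one mixed batch, and are illustrative names:\nimport torch.nn.functional as F\n\ndef training_step(model, optimizer, x_lab, y_lab, x_unlab, alpha=1.0):\n    # Labelled examples: multilabel BCE plus adversarial loss\n    l_bce = F.binary_cross_entropy_with_logits(model(x_lab), y_lab)\n    l_adv = mlvat_vadv_loss(model, x_lab)\n    # Unlabelled examples: adversarial loss only\n    l_adv = l_adv + mlvat_vadv_loss(model, x_unlab)\n    loss = l_bce + alpha * l_adv\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual Semi-Supervised Setup",

"sec_num": "3.3"

},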
|
{ |
|
"text": "For leveraging cross-learning between multiple languages in a semi-supervised setup, we experiment with different multilingual models. We experiment with off-the-shelf multilingual BERT, mBERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019) models which have been trained with corpus from multiple languages. Since we are performing emotion recognition on multilingual tweets, we evaluate the domain-adaptive multilingual model XLM-Tw (Barbieri et al., 2021) trained using a 198M tweet corpus across 30 languages over the XLM-R checkpoint. For exploring the effect of task-adaptive pretraining, we evaluate XLM-Tw-S, which is finetuned for sentiment analysis over tweets which is arguably a task related to emotion recognition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 214, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Representation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We also experiment with monolingual models trained over the corpus from the same language for comparison with multilingual models and setting up the baselines for each language: English BERT (E-BERT) (Devlin et al., 2018) for English, BetoBERT (Ca\u00f1ete et al., 2020) for Spanish and AraBERT (Antoun et al., 2020) for Arabic. We experiment with and without finetuning the representations to evaluate the performance of these representations out-of-the box and finetuning over our task. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 221, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 265, |
|
"text": "(Ca\u00f1ete et al., 2020)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 311, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monolingual Representation", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We evaluate on the SemEval2018 dataset (Affect in Tweets: Task E-c) (Mohammad et al., 2018) dataset. The dataset consists of tweets scraped from twitter in English, Spanish and Arabic. Each tweet is annotated with the presence of 11 emotions anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise and trust. Some tweets are neutral and do not have the presence of any emotion. The dataset has 3 splits -train, dev and test (Table 15) . Following Mohammad et al. (2018) , we measure the multilabel accuracy using Jaccard Index (JI), Macro F1 (MaF1) and Micro F1 (MiF1) scores (Chinchor, 1992) over the test set of these languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 91, |
|
"text": "(Mohammad et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 499, |
|
"text": "Mohammad et al. (2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 622, |
|
"text": "(Chinchor, 1992)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 464, |
|
"text": "(Table 15)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and Evaluation", |
|
"sec_num": "3.6" |
|
}, |
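
{

"text": "For reference, the multilabel (Jaccard) accuracy used here averages, over examples, the intersection-over-union of the predicted and gold label sets; a minimal numpy sketch, where the convention for examples with empty gold and predicted label sets is our assumption:\nimport numpy as np\n\ndef jaccard_index(y_true, y_pred):\n    # y_true, y_pred: binary arrays of shape (n_examples, n_labels)\n    inter = np.logical_and(y_true, y_pred).sum(axis=1)\n    union = np.logical_or(y_true, y_pred).sum(axis=1)\n    # Assumed convention: a neutral example with no gold and no predicted labels scores 1\n    return float(np.where(union == 0, 1.0, inter / np.maximum(union, 1)).mean())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset and Evaluation",

"sec_num": "3.6"

},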
|
{ |
|
"text": "We select a percentage (10%, 25%, 50%, 100%) of the data from the target language as labelled data and use the remaining data from same language along with data of other languages as the unlabelled data. In Table 1 for Arabic, we see that by using 10%, 25%, 50% and 100% of the labelled data, mlVAT improves upon the results of training over the same amount of supervised data by 6.2%, 2.8%, 2.2% and 2.7% (Jaccard Index;JI) respectively. Similar improvements are also observed on the micro F1 (MiF1) and macro F1 (MaF1). It is interesting to note that by using only 50% of the labelled data with unlabelled data, we are able to match the performance of supervised learning with 100% of the data for Spanish. This shows that mlVAT is able to leverage the unlabelled data of Spanish and English for improving the performance over Arabic language.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 214, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semi-Supervised Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Similar observations on English can be made from Table 2 also where we notice an improvement of 1.8%, 2.6%, 2.6% and 2% on the Jaccard Index and proportional improvements on other metrices also. For English also, we note that by using 10% of labelled data, mlVAT is able to improve on supervised results with 25% of the data. For Spanish, mlVAT helps for the 10% and 50% split as reported in Table 3 but is not able to improve all the metrics for the other splits. Overall, for majority of the languages and splits, we see that by adding unlabelled data, mlVAT improves upon the performance over supervised learning consistently and helps in decreasing the requirements for annotated data. Frozen backbone: We perform semi-supervised experiments with frozen backbone to investigate the effect of mlVAT on the backbone and classification head. We repeat similar experiments as in previous sections for Spanish and English, but freeze the backbone and only train the classification head. From the Figure 1 , we can observe that mlVAT consistently improves the performance for both languages over all the splits. This demonstrates that the performance gains are backbone-agnostic allowing for application of mlVAT on other back- Table 4 . MSE in presence of sigmoid shows superior performance than the other loss functions. The superior performance can be attributed to the normalization of the logits which encourages more stable activations and training. For experimenting with KLdivergence, we interpreted the normalized logits as probabilities but observed substantially poorer performance. We used English language with 10% of labelled examples and XLM-Tw model for these experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 399, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1003, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1226, |
|
"end": 1233, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semi-Supervised Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Ratio 1 2 3 4 5 JI 55.1 54.4 55.2 53.6 52.9 MiF1 66.9 66.4 67.0 65.8 65.3 MaF1 50.0 50.9 50.8 50.5 47.0 Unlabelled Batch Ratio: In Table 5 , we study the impact of ratio of the batch size of the unlabelled examples while keeping the batch size of the labelled data fixed. At higher ratios, the adversarial loss overpowers the supervised learning resulting in a performance drop. However, for the lower ratios, the we did not observe a consistent trend. Epsilon: We study the impact of epsilon ( ) on the performance in Table 6 . Higher values create more aggressive adversarial samples with high pertur-0.1 0.25 0.5 0.75 1 JI 54.9 54.9 55.2 54.7 54.6 MiF1 66.7 66.8 67.0 66.6 66.8 MaF1 50.4 50.3 50.8 50.3 49.9 Table 6 : Comparison of epsilon ( ) values on English with 10% labelled data bation while lower values may create insufficient perturbation. From our empirical experiments, we note that 0.5 works better than the other values and we use this for all our semi-supervised experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 138, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 526, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 711, |
|
"end": 718, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semi-Supervised Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this section, we perform supervised learning experiments with frozen and finetuned representations by using the labelled data of each language for evaluating the performance of domainadaptive, task-adaptive, monolingual and multilingual contextual models. In Table 8 , 9 and 7, we present the results for different monolingual and multilingual contextual models for the three languages with frozen backbones. We use English BERT (E-BERT), BetoBERT and AraBERT as monolingual models for English, Spanish and Arabic respectively. We note that for all the languages, mBERT performs substantially poorer than the monolingual contextual models of the respective languages. However, XLM-R which is another multilingual model performs competitive with the monolingual models which is not surprising as XLM-R has shown improvements over mBERT in other language tasks also (Conneau et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 867, |
|
"end": 889, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 269, |
|
"text": "Table 8", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain and Task Adaptive Pretraining", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We further evaluate Domain-adaptive (XLM-Tw) and Task-adaptive (XLM-Tw-S) versions of the XLM-R multilingual model and observe substantial improvements. XLM-Tw-S improves the Jaccard Index (JI) by 5.5%, 6.5% and 8.4% for Arabic, English and Spanish respectively, highlighting the advantages of task-specific pretraining for contextual models. XLM-Tw also improves upon XLM-R for all the languages reiterating the importance of pretraining the contextual models with domain specific data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain and Task Adaptive Pretraining", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We study the impact of finetuning the monolingual and best performing multilingual model on our task to compare the capabilities of multilingual models with monolingual after finetuning on the task. We notice that finetuning bridges the gap to some extent but still the domain adaptive multilingual XLM-Tw works better than the finetuned monolingual models for all the languages as shown in Table 10 , 11 and 12. For English, the improvement is relatively moderate but for Spanish and Arabic, XLM-Tw demonstrates substantial gains.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 391, |
|
"end": 399, |
|
"text": "Table 10", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain and Task Adaptive Pretraining", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "English: Alhuzali and Ananiadou (2021) (SpanEmo) use sentences along with emotion categories as input to the contextual model and use label correlation aware loss (LCA) to model correlation among emotions classes. LVC-Seq2Emo (Huang et al., 2019) propose a latent variable chain transformation and use it with sequence to emotion for modelling correlation between emotions. BinC (Jabreel and Moreno, 2019) transform the multilabel classification problem into binary classification problems and train a recurrent neural network over this transformed setting. (Baziotis et al., 2018) Overall, our results improve upon the existing approaches on Jaccard Index(JI) by 7% for Spanish, 4.5% for Arabic and around 1% for English and setup a new state-of-the-art for all the three languages highlighting the efficacy of semi-supervised learning and domain-adaptive multilingual models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 246, |
|
"text": "(Huang et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 405, |
|
"text": "(Jabreel and Moreno, 2019)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 581, |
|
"text": "(Baziotis et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with existing methods", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We combine data of all the three languages and train a combined model and test this model on the test set of each language. We notice that the combined model improves upon the performance of individual models for Arabic and Spanish (Table 13) while the performance of English is comparable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 242, |
|
"text": "(Table 13)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Crosslingual Experiments", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In Table 14 , we perform crosslingual experiments to evaluate the performance of a model trained on one language on another language. It is interesting to note that for Arabic and Spanish, the cross lingual performance is competitive with performance using some of the pretrained networks which is encouraging. We also observe that English demonstrates better crosslingual capability than Arabic and Spanish. A possible reason might be the large size of the English training dataset. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Table 14", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Crosslingual Experiments", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We perform experiments to evaluate the contribution of different layers of the XLM-Tw-S model. We extract representation of the tokens of a sentence from a particular layer of the contextual model and take an average across tokens for obtaining the representation of the sentence. We train a classifier over these sentence representations and report the results. From Figure 2 , we note that higher layers provide better performance for all the three languages showing that the higher-order contextual information is useful for understanding the emotions in the text. Refer Appendix A for detailed results. Similar to Tenney et al. (2019) , we also compute the improvement due to incrementally adding more layers to the previous layers and calculate the expected layer:", |
|
"cite_spans": [ |
|
{ |
|
"start": 618, |
|
"end": 638, |
|
"text": "Tenney et al. (2019)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 376, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Experiments", |
|
"sec_num": "6" |
|
}, |
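
{

"text": "The probing setup can be sketched with the huggingface transformers API, averaging token vectors from a chosen layer; the checkpoint name below is a placeholder rather than the exact model used here:\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\ntok = AutoTokenizer.from_pretrained(\"xlm-roberta-base\")  # placeholder checkpoint\nmodel = AutoModel.from_pretrained(\"xlm-roberta-base\", output_hidden_states=True)\n\ndef sentence_rep(text, layer):\n    inputs = tok(text, return_tensors=\"pt\")\n    with torch.no_grad():\n        hidden = model(**inputs).hidden_states  # index 0 = embeddings, 1..L = transformer layers\n    mask = inputs[\"attention_mask\"].unsqueeze(-1)\n    # Mean over non-padding tokens at the chosen layer\n    return (hidden[layer] * mask).sum(1) / mask.sum(1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probing Experiments",

"sec_num": "6"

},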
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E \u2206 [l] = L l=1 l * \u2206 (l) L l=1 \u2206 (l)", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Probing Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where, \u2206 (l) is the change in the Jaccard Index metric when adding layer l to the previous layers. We start from layer 0 and incrementally add higher layers for representing the tokens of the sentence Table 13 : Experiments on the combination of languages followed by averaging for representing the whole sentence. The expected layer for English, Spanish and Arabic computes to 6.9, 6.2 and 6.8 respectively showing that higher layers are useful for the task. This analysis is helpful to understand the improvement achieved by adding layers to the previous layers. For all the three languages, we obtain the best results on using the average of all the layers for representing the sentences which shows that different layers encapsulate complementary information about emotions. We finetune the contextual models following huggingface 2 with a batch size of 8, learning rate of 2e-5 and weight decay of 0.01 using AdamW optimizer for 30 epochs. The classifier is a two layered neural network with 768 hidden dimensions and 11 output dimensions with 0.1 dropout. For mlVAT experiments, the number of examples sampled from the unlabelled set for each batch are 24, and \u03b1 are set to 0.5 and 1 using cross validation. We apply sigmoid over the logits and train using binary cross entropy loss. We use validation set for finding optimal hyperparameters and evaluate on the test set using combination of training and validation set for training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 209, |
|
"text": "Table 13", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Experiments", |
|
"sec_num": "6" |
|
}, |
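
{

"text": "Equation 8 in code form, over per-layer improvements \u2206(l); a toy sketch:\ndef expected_layer(deltas):\n    # deltas[l-1] = change in Jaccard Index when layer l is added (Equation 8)\n    num = sum(l * d for l, d in enumerate(deltas, start=1))\n    den = sum(deltas)\n    return num / den",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probing Experiments",

"sec_num": "6"

},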
|
{ |
|
"text": "In this work, we explored semi-supervised learning using Virtual Adversarial Training (VAT) for multilabel emotion classification in a multilingual setup and showed performance improvement by leveraging unlabelled data from different languages. We used Mean Square Error (MSE) as the divergence measure for leveraging VAT for multilabel emotion classification. We also evaluated the performance of monolingual, multilingual and domain-adaptive and task-adaptive multilingual contextual models across three languages -English, Spanish and Arabic for multilabel and multilingual emotion recognition and obtained state-of-the-art results. We also performed probing experiments for understanding the impact of different layers of contextual models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In recent years, deep learning approaches have played an important role in state-of-the-art natural language processing systems. However, obtaining labelled data for training these models is expensive and time consuming, especially for multilingual and multilabel scenarios. In such case, multilingual semi-supervised and unsupervised techniques can play a pivotal role. Our work introduces a semisupervised way for detecting and understanding textual data across multiple languages. Our methods could be used in sensitive contexts such as legal or healthcare settings, and it is essential that any work using our probe method undertake extensive quality assurance and robustness testing before using it in their setting. The datasets used in our work do not contain any sensitive information to the best of our knowledge. Table 16 , 17 and 18, we report the performance of each layer of frozen XLM-Tw-S model. We extract the layer representation of each token of the sentence and average them for representing the sentence. For all the languages, we note that the higher layers show superior performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 823, |
|
"end": 831, |
|
"text": "Table 16", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Broader Impact and Discussion of Ethics", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "https://competitions.codalab.org/competitions/17751", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://huggingface.co/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Spanemo: Casting multi-label emotion classification as span-prediction", |
|
"authors": [ |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Alhuzali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2101.10038" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hassan Alhuzali and Sophia Ananiadou. 2021. Spanemo: Casting multi-label emotion clas- sification as span-prediction. arXiv preprint arXiv:2101.10038.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Hybrid feature model for emotion recognition in arabic text", |
|
"authors": [], |
|
"year": 2020, |
|
"venue": "IEEE Access", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "37843--37854", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nourah Alswaidan and Mohamed El Bachir Menai. 2020. Hybrid feature model for emotion recognition in arabic text. IEEE Access, 8:37843-37854.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Arabert: Transformer-based model for arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.00104" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. Arabert: Transformer-based model for arabic language understanding. arXiv preprint arXiv:2003.00104.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "EMA at SemEval-2018 task 1: Emotion mining for Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Gilbert", |
|
"middle": [], |
|
"last": "Badaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Obeida", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Jundi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alaa", |
|
"middle": [], |
|
"last": "Khaddaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alaa", |
|
"middle": [], |
|
"last": "Maarouf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raslan", |
|
"middle": [], |
|
"last": "Kain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wassim", |
|
"middle": [], |
|
"last": "El-Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S18-1036" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gilbert Badaro, Obeida El Jundi, Alaa Khaddaj, Alaa Maarouf, Raslan Kain, Hazem Hajj, and Wassim El- Hajj. 2018. EMA at SemEval-2018 task 1: Emotion mining for Arabic. In Proceedings of The 12th In- ternational Workshop on Semantic Evaluation, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semeval 2018 task 2: Multilingual emoji prediction", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Ronzano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa Anke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Horacio", |
|
"middle": [], |
|
"last": "Saggion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Ronzano, Luis Espinosa Anke, Miguel Ballesteros, Valerio Basile, Viviana Patti, and Horacio Saggion. 2018. Semeval 2018 task 2: Multilingual emoji prediction. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 24-33.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Multilingual Language Model Toolkit for Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Barbieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa-Anke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXivpreprintarXiv:2104.12250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Barbieri, Luis Espinosa-Anke, and Jose Camacho-Collados. 2021. A Multilingual Lan- guage Model Toolkit for Twitter. In arXiv preprint arXiv:2104.12250.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, and Alexandros Potamianos", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Baziotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikos", |
|
"middle": [], |
|
"last": "Athanasiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Chronopoulou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Athanasia", |
|
"middle": [], |
|
"last": "Kolovou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Ntua-slp at semeval-2018 task 1: Predicting affective content in tweets with deep attentive rnns and transfer learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.06658" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Baziotis, Nikos Athanasiou, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, and Alexandros Potamianos. 2018. Ntua-slp at semeval-2018 task 1: Predicting affec- tive content in tweets with deep attentive rnns and transfer learning. arXiv preprint arXiv:1804.06658.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Spanish pre-trained bert model and evaluation data", |
|
"authors": [ |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [], |
|
"last": "Ca\u00f1ete", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Chaperon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rodrigo", |
|
"middle": [], |
|
"last": "Fuentes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jou-Hui", |
|
"middle": [], |
|
"last": "Ho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hojin", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "P\u00e9rez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Robust neural machine translation with doubly adversarial inputs", |
|
"authors": [ |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.02443" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly ad- versarial inputs. arXiv preprint arXiv:1906.02443.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "MUC-4 evaluation metrics", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Chinchor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Fourth Message Uunderstanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Chinchor. 1992. MUC-4 evaluation metrics. In Fourth Message Uunderstanding Conference (MUC- 4): Proceedings of a Conference Held in McLean, Virginia, June 16-18, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Semi-supervised sequence modeling with cross-view training", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.08370" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D Man- ning, and Quoc V Le. 2018. Semi-supervised se- quence modeling with cross-view training. arXiv preprint arXiv:1809.08370.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02116" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "GoEmotions: A dataset of fine-grained emotions", |
|
"authors": [ |
|
{ |
|
"first": "Dorottya", |
|
"middle": [], |
|
"last": "Demszky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Movshovitz-Attias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeongwoo", |
|
"middle": [], |
|
"last": "Ko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Cowen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Nemade", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4040--4054", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.372" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeong- woo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4040-4054, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Twitter part-of-speech tagging for all: Overcoming sparse and noisy data", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the international conference recent advances in natural language processing ranlp 2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "198--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. In Proceed- ings of the international conference recent advances in natural language processing ranlp 2013, pages 198-206.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Expression and the nature of emotion", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Ekman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Approaches to emotion", |
|
"volume": "3", |
|
"issue": "19", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Ekman. 1984. Expression and the nature of emo- tion. Approaches to emotion, 3(19):344.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", |
|
"authors": [ |
|
{ |
|
"first": "Bjarke", |
|
"middle": [], |
|
"last": "Felbo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Mislove", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iyad", |
|
"middle": [], |
|
"last": "Rahwan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sune", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1615--1625", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1169" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1615-1625, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Affect-LM: A neural language model for customizable affective text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sayan", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Chollet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Laksana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Scherer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-LM: A neural language model for customiz- able affective text generation. Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Discriminative methods for multi-labeled classification", |
|
"authors": [ |
|
{ |
|
"first": "Shantanu", |
|
"middle": [], |
|
"last": "Godbole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Pacific-Asia conference on knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shantanu Godbole and Sunita Sarawagi. 2004. Dis- criminative methods for multi-labeled classification. In Pacific-Asia conference on knowledge discovery and data mining, pages 22-30. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "ELiRF-UPV at SemEval-2018 tasks 1 and 3: Affect and irony detection in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Jos\u00e9-\u00c1ngel", |
|
"middle": [], |
|
"last": "Gonz\u00e1lez", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Llu\u00eds-F.", |

"middle": [], |

"last": "Hurtado", |

"suffix": "" |

}, |

{ |

"first": "Ferran", |

"middle": [], |

"last": "Pla", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "565--569", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S18-1092" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jos\u00e9-\u00c1ngel Gonz\u00e1lez, Llu\u00eds-F. Hurtado, and Ferran Pla. 2018. ELiRF-UPV at SemEval-2018 tasks 1 and 3: Affect and irony detection in tweets. In Pro- ceedings of The 12th International Workshop on Se- mantic Evaluation, pages 565-569, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Explaining and harnessing adversarial examples", |
|
"authors": [ |
|
{ |

"first": "Ian", |

"middle": [ |

"J" |

], |

"last": "Goodfellow", |

"suffix": "" |

}, |

{ |

"first": "Jonathon", |

"middle": [], |

"last": "Shlens", |

"suffix": "" |

}, |

{ |

"first": "Christian", |

"middle": [], |

"last": "Szegedy", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6572" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "2020. Don't stop pretraining: adapt language models to domains and tasks", |
|
"authors": [ |
|
{ |

"first": "Suchin", |

"middle": [], |

"last": "Gururangan", |

"suffix": "" |

}, |

{ |

"first": "Ana", |

"middle": [], |

"last": "Marasovi\u0107", |

"suffix": "" |

}, |

{ |

"first": "Swabha", |

"middle": [], |

"last": "Swayamdipta", |

"suffix": "" |

}, |

{ |

"first": "Kyle", |

"middle": [], |

"last": "Lo", |

"suffix": "" |

}, |

{ |

"first": "Iz", |

"middle": [], |

"last": "Beltagy", |

"suffix": "" |

}, |

{ |

"first": "Doug", |

"middle": [], |

"last": "Downey", |

"suffix": "" |

}, |

{ |

"first": "Noah", |

"middle": [ |

"A" |

], |

"last": "Smith", |

"suffix": "" |

} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.10964" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Cross-lingual emotion detection", |
|
"authors": [ |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaden", |
|
"middle": [], |
|
"last": "Shaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2106.06017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabit Hassan, Shaden Shaar, and Kareem Darwish. 2021. Cross-lingual emotion detection. arXiv preprint arXiv:2106.06017.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Universal language model fine-tuning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.06146" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Seq2emo: A sequence to multi-label emotion classification model", |
|
"authors": [ |
|
{ |
|
"first": "Chenyang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amine", |
|
"middle": [], |
|
"last": "Trabelsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuebin", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nawshad", |
|
"middle": [], |
|
"last": "Farruque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Osmar R Zaiane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4717--4724", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenyang Huang, Amine Trabelsi, Xuebin Qin, Naw- shad Farruque, Lili Mou, and Osmar R Zaiane. 2021. Seq2emo: A sequence to multi-label emotion classi- fication model. In Proceedings of the 2021 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 4717-4724.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Seq2emo for multi-label emotion classification based on latent variable chains transformation", |
|
"authors": [ |
|
{ |
|
"first": "Chenyang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amine", |
|
"middle": [], |
|
"last": "Trabelsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuebin", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nawshad", |
|
"middle": [], |
|
"last": "Farruque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Osmar R Za\u00efane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02147" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenyang Huang, Amine Trabelsi, Xuebin Qin, Nawshad Farruque, and Osmar R Za\u00efane. 2019. Seq2emo for multi-label emotion classification based on latent variable chains transformation. arXiv preprint arXiv:1911.02147.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A deep learning-based approach for multi-label emotion classification in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Mohammed", |
|
"middle": [], |
|
"last": "Jabreel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Moreno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Applied Sciences", |
|
"volume": "9", |
|
"issue": "6", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammed Jabreel and Antonio Moreno. 2019. A deep learning-based approach for multi-label emo- tion classification in tweets. Applied Sciences, 9(6):1123.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Temporal ensembling for semi-supervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Samuli", |
|
"middle": [], |
|
"last": "Laine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "Aila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1610.02242" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuli Laine and Timo Aila. 2016. Temporal ensem- bling for semi-supervised learning. arXiv preprint arXiv:1610.02242.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Effective self-training for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Pro- ceedings of the Human Language Technology Con- ference of the NAACL, Main Conference, pages 152- 159.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "TCS research at SemEval-2018 task 1: Learning robust representations using multi-attention architecture", |
|
"authors": [ |
|
{ |
|
"first": "Hardik", |
|
"middle": [], |
|
"last": "Meisheri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lipika", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "291--299", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S18-1043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hardik Meisheri and Lipika Dey. 2018. TCS research at SemEval-2018 task 1: Learning robust represen- tations using multi-attention architecture. In Pro- ceedings of The 12th International Workshop on Se- mantic Evaluation, pages 291-299, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Adversarial training methods for semi-supervised text classification", |
|
"authors": [ |
|
{ |
|
"first": "Takeru", |
|
"middle": [], |
|
"last": "Miyato", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Andrew", |

"middle": [ |

"M" |

], |

"last": "Dai", |

"suffix": "" |

}, |

{ |

"first": "Ian", |

"middle": [], |

"last": "Goodfellow", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1605.07725" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Good- fellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Takeru", |
|
"middle": [], |
|
"last": "Miyato", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Shin-ichi", |

"middle": [], |

"last": "Maeda", |

"suffix": "" |

}, |

{ |

"first": "Masanori", |

"middle": [], |

"last": "Koyama", |

"suffix": "" |

}, |

{ |

"first": "Shin", |

"middle": [], |

"last": "Ishii", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE transactions on pattern analysis and machine intelligence", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "1979--1993", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pat- tern analysis and machine intelligence, 41(8):1979- 1993.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "#emotional tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "246--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad. 2012. #emotional tweets. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Vol- ume 2: Proceedings of the Sixth International Work- shop on Semantic Evaluation (SemEval 2012), pages 246-255, Montr\u00e9al, Canada. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "SemEval-2018 task 1: Affect in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Bravo-Marquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Tw-StAR at SemEval-2018 task 1: Preprocessing impact on multi-label emotion classification", |
|
"authors": [ |
|
{ |
|
"first": "Hala", |
|
"middle": [], |
|
"last": "Mulki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chedi", |
|
"middle": [], |
|
"last": "Bechikh Ali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hatem", |
|
"middle": [], |
|
"last": "Haddad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ismail", |
|
"middle": [], |
|
"last": "Babaoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--171", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S18-1024" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hala Mulki, Chedi Bechikh Ali, Hatem Haddad, and Ismail Babaoglu. 2018. Tw-StAR at SemEval-2018 task 1: Preprocessing impact on multi-label emo- tion classification. In Proceedings of The 12th Inter- national Workshop on Semantic Evaluation, pages 167-171, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks", |
|
"authors": [ |
|
{ |

"first": "Matthew", |

"middle": [ |

"E" |

], |

"last": "Peters", |

"suffix": "" |

}, |

{ |

"first": "Sebastian", |

"middle": [], |

"last": "Ruder", |

"suffix": "" |

}, |

{ |

"first": "Noah", |

"middle": [ |

"A" |

], |

"last": "Smith", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.05987" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A general psychoevolutionary theory of emotion", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Plutchik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Theories of emotion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Theories of emotion, pages 3-33. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Adversarial attacks on copyright detection systems", |
|
"authors": [ |
|
{ |
|
"first": "Parsa", |
|
"middle": [], |
|
"last": "Saadatpanah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Shafahi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8307--8315", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parsa Saadatpanah, Ali Shafahi, and Tom Goldstein. 2020. Adversarial attacks on copyright detection systems. In International Conference on Machine Learning, pages 8307-8315. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Revisiting lstm networks for semi-supervised text classification via mixed objective function", |
|
"authors": [ |
|
{ |
|
"first": "Devendra", |
|
"middle": [], |
|
"last": "Singh Sachan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manzil", |
|
"middle": [], |
|
"last": "Zaheer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "6940--6948", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devendra Singh Sachan, Manzil Zaheer, and Ruslan Salakhutdinov. 2019. Revisiting lstm networks for semi-supervised text classification via mixed objec- tive function. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, volume 33, pages 6940-6948.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Emotion detection from text and speech: a survey", |
|
"authors": [ |
|
{ |
|
"first": "Kashfia", |
|
"middle": [], |
|
"last": "Sailunaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manmeet", |
|
"middle": [], |
|
"last": "Dhaliwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Rokne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reda", |
|
"middle": [], |
|
"last": "Alhajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Social Network Analysis and Mining", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "1--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kashfia Sailunaz, Manmeet Dhaliwal, Jon Rokne, and Reda Alhajj. 2018. Emotion detection from text and speech: a survey. Social Network Analysis and Min- ing, 8(1):1-26.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A context integrated model for multilabel emotion detection", |
|
"authors": [ |
|
{ |

"first": "Ahmed", |

"middle": [ |

"E" |

], |

"last": "Samy", |

"suffix": "" |

}, |

{ |

"first": "Samhaa", |

"middle": [ |

"R" |

], |

"last": "El-Beltagy", |

"suffix": "" |

}, |

{ |

"first": "Ehab", |

"middle": [], |

"last": "Hassanien", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Procedia computer science", |
|
"volume": "142", |
|
"issue": "", |
|
"pages": "61--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed E Samy, Samhaa R El-Beltagy, and Ehab Has- sanien. 2018. A context integrated model for multi- label emotion detection. Procedia computer science, 142:61-71.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Evidence for universality and cultural variation of differential emotion response patterning", |
|
"authors": [ |
|
{ |

"first": "Klaus", |

"middle": [ |

"R" |

], |

"last": "Scherer", |

"suffix": "" |

}, |

{ |

"first": "Harald", |

"middle": [ |

"G" |

], |

"last": "Wallbott", |

"suffix": "" |

} |
|
], |
|
"year": 1994, |
|
"venue": "Journal of personality and social psychology", |
|
"volume": "66", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus R Scherer and Harald G Wallbott. 1994. Evi- dence for universality and cultural variation of differ- ential emotion response patterning. Journal of per- sonality and social psychology, 66(2):310.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", |
|
"authors": [ |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Tarvainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harri", |
|
"middle": [], |
|
"last": "Valpola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.01780" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antti Tarvainen and Harri Valpola. 2017. Mean teach- ers are better role models: Weight-averaged consis- tency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Bert rediscovers the classical nlp pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.05950" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "A survey on semi-supervised learning", |
|
"authors": [ |
|
{ |

"first": "Jesper", |

"middle": [ |

"E" |

], |

"last": "Van Engelen", |

"suffix": "" |

}, |

{ |

"first": "Holger", |

"middle": [ |

"H" |

], |

"last": "Hoos", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Machine Learning", |
|
"volume": "109", |
|
"issue": "2", |
|
"pages": "373--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesper E Van Engelen and Holger H Hoos. 2020. A sur- vey on semi-supervised learning. Machine Learn- ing, 109(2):373-440.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Chaowei", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruizhi", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fisher", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingyan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawn", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "217--234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, and Dawn Song. 2018. Character- izing adversarial examples based on spatial consis- tency information for semantic segmentation. In Proceedings of the European Conference on Com- puter Vision (ECCV), pages 217-234.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Current state of text sentiment analysis from opinion to emotion mining", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Yadollahi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ameneh", |
|
"middle": [ |
|
"Gholipour" |
|
], |
|
"last": "Shahraki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Osmar R Zaiane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACM Computing Surveys (CSUR)", |
|
"volume": "50", |
|
"issue": "2", |
|
"pages": "1--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Yadollahi, Ameneh Gholipour Shahraki, and Os- mar R Zaiane. 2017. Current state of text sentiment analysis from opinion to emotion mining. ACM Computing Surveys (CSUR), 50(2):1-33.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "SGM: Sequence generation model for multi-label classification", |
|
"authors": [ |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Houfeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: Sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "A survey on deep semi-supervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Xiangli", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zixing", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irwin", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zenglin", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2103.00550" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu. 2021. A survey on deep semi-supervised learn- ing. arXiv preprint arXiv:2103.00550.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Unsupervised word sense disambiguation rivaling supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "33rd annual meeting of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Improving multilabel emotion classification via sentiment classification with dual attention transfer network", |
|
"authors": [ |
|
{ |
|
"first": "Jianfei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Marujo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradeep", |
|
"middle": [], |
|
"last": "Karuturi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Brendel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianfei Yu, Luis Marujo, Jing Jiang, Pradeep Karu- turi, and William Brendel. 2018. Improving multi- label emotion classification via sentiment classifica- tion with dual attention transfer network. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Freelb: Enhanced adversarial training for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siqi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.11764" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Gold- stein, and Jingjing Liu. 2019. Freelb: Enhanced ad- versarial training for natural language understanding. arXiv preprint arXiv:1909.11764.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Comparison of Jaccard Index for English and Spanish across different ratio of labelled examples with frozen backbone. mlVAT (orange) improves upon supervised settings (Sup) (blue) for both languages.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Performance metrices across different layers for XLM-Tw-S. Circle, Triangle and Square represent JI, MiF1 and MaF1 respectively.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Sup 44.05 57.86 40.91 mlVAT 46.79 60.36 44.41", |
|
"html": null, |
|
"content": "<table><tr><td>%</td><td>Method</td><td>JI</td><td>MiF1 MaF1</td></tr><tr><td>10</td><td/><td/><td/></tr><tr><td>25</td><td colspan=\"3\">Sup mlVAT 51.08 63.96 47.31 49.69 62.80 44.19</td></tr><tr><td>50</td><td colspan=\"3\">Sup mlVAT 55.11 66.79 52.52 53.95 66.26 48.57</td></tr><tr><td>100</td><td colspan=\"3\">Sup mlVAT 57.31 68.18 52.15 55.78 67.41 50.12</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "mlVAT and Supervised (Sup) experiments on Arabic across different ratios of labelled examples", |
|
"html": null, |
|
"content": "<table><tr><td>%</td><td>Method</td><td>JI</td><td>MiF1 MaF1</td></tr><tr><td>10</td><td colspan=\"3\">Sup mlVAT 55.15 67.01 50.57 54.15 66.33 48.94</td></tr><tr><td>25</td><td colspan=\"3\">Sup mlVAT 56.54 68.52 51.18 55.11 66.99 47.83</td></tr><tr><td>50</td><td colspan=\"3\">Sup mlVAT 58.67 70.03 51.55 57.20 69.14 54.14</td></tr><tr><td>100</td><td colspan=\"3\">Sup mlVAT 60.70 71.90 56.10 59.78 71.19 53.43</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "mlVAT and Supervised (Sup) experiments on English across different ratios of labelled examples", |
|
"html": null, |
|
"content": "<table><tr><td>%</td><td>Method</td><td>JI</td><td>MiF1 MaF1</td></tr><tr><td>10</td><td colspan=\"3\">Sup mlVAT 46.05 54.83 42.49 44.36 53.17 38.28</td></tr><tr><td>25</td><td colspan=\"3\">Sup mlVAT 52.05 60.17 49.15 52.89 61.30 48.99</td></tr><tr><td>50</td><td colspan=\"3\">Sup mlVAT 55.70 63.39 54.19 55.17 63.20 51.70</td></tr><tr><td>100</td><td colspan=\"3\">Sup mlVAT 56.89 64.89 51.77 57.04 65.31 51.53</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "mlVAT and Supervised (Sup) experiments on Spanish across different ratios of labelled examples", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "Comparison of batch ratios on English with 10% labelled data", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "Performance of pretrained models on Arabic", |
|
"html": null, |
|
"content": "<table><tr><td>Model</td><td colspan=\"2\">JI MiF1 MaF1</td></tr><tr><td colspan=\"2\">XLM-Tw-S 53.9 66.2</td><td>47.8</td></tr><tr><td>XLM-Tw</td><td>50.6 63.5</td><td>45.9</td></tr><tr><td>XLM-R</td><td>48.6 61.9</td><td>45.7</td></tr><tr><td>mBERT</td><td>44.7 57.6</td><td>39.2</td></tr><tr><td>E-BERT</td><td>48.2 61.4</td><td>42.9</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"text": "Performance of pretrained models on English", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>: Performance of pretrained models on Spanish</td></tr><tr><td>features while Gonz\u00e1lez et al. (2018) (ELiRF) ex-</td></tr><tr><td>plored preprocessing and adapted the tokeniser</td></tr><tr><td>for Spanish tweets. MILAB was the wining en-</td></tr><tr><td>try in the SemEval2018 task. Hassan et al. (2021)</td></tr><tr><td>(CER) finetuned the Spanish BERT representations</td></tr><tr><td>(BetoBERT).</td></tr><tr><td>Arabic: For Arabic, Samy et al. (2018) (CA-GRU)</td></tr><tr><td>extract contextual information from the tweets</td></tr><tr><td>and uses them as context for emotion recogni-</td></tr><tr><td>tion using RNNs. Hassan et al. (2021) (CER) fine-</td></tr><tr><td>tuned BERT representations. Alswaidan and Menai</td></tr><tr><td>(2020) (HEF) proposed hybrid neural network us-</td></tr><tr><td>ing different embeddings. Badaro et al. (2018)</td></tr><tr><td>(EMA) used preprocessing techniques like normal-</td></tr><tr><td>isation, stemming etc.</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF12": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">: Results on English</td><td/></tr><tr><td>Model</td><td colspan=\"3\">JI MiF1 MaF1</td></tr><tr><td>mlVAT</td><td colspan=\"2\">56.9 64.9</td><td>51.8</td></tr><tr><td>XLM-Tw</td><td colspan=\"2\">57.0 65.3</td><td>51.5</td></tr><tr><td colspan=\"3\">BetoBERT 52.7 60.8</td><td>48.7</td></tr><tr><td>SpanEmo</td><td colspan=\"2\">53.2 64.1</td><td>53.2</td></tr><tr><td>CER</td><td>52.4</td><td>-</td><td>53.7</td></tr><tr><td>MILAB</td><td colspan=\"2\">46.9 55.8</td><td>40.7</td></tr><tr><td>ELiRF</td><td colspan=\"2\">45.8 53.5</td><td>44.0</td></tr><tr><td>TW-StAR</td><td colspan=\"2\">43.8 52.0</td><td>39.2</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF13": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF15": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td/><td>: Results on Arabic</td><td/></tr><tr><td colspan=\"3\">Language JI MiF1 MaF1</td></tr><tr><td>EN</td><td>59.4 70.6</td><td>55.7</td></tr><tr><td>ES</td><td>57.8 65.8</td><td>56.6</td></tr><tr><td>AR</td><td>57.8 68.6</td><td>55.5</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF17": { |
|
"text": "Crosslingual experiments between the languages", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Language Train Dev Test</td></tr><tr><td>English</td><td>6838 886 3259</td></tr><tr><td>Arabic</td><td>2278 585 1518</td></tr><tr><td>Spanish</td><td>3561 679 2854</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF18": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>: SemEval2018 dataset statistics</td></tr><tr><td>7 Training Details</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF20": { |
|
"text": "Comparison of layer performance for English using XLM-Tw-S model", |
|
"html": null, |
|
"content": "<table><tr><td>Layers</td><td>JI</td><td>MiF1 MaF1</td></tr><tr><td>Layer 0</td><td colspan=\"2\">39.66 48.99 38.49</td></tr><tr><td>Layer 1</td><td colspan=\"2\">40.43 49.45 36.94</td></tr><tr><td>Layer 2</td><td colspan=\"2\">42.19 50.57 37.20</td></tr><tr><td>Layer 3</td><td colspan=\"2\">43.03 51.83 39.42</td></tr><tr><td>Layer 4</td><td colspan=\"2\">43.94 53.11 40.99</td></tr><tr><td>Layer 5</td><td colspan=\"2\">46.38 55.37 42.88</td></tr><tr><td>Layer 6</td><td colspan=\"2\">46.68 56.13 44.94</td></tr><tr><td>Layer 7</td><td colspan=\"2\">47.51 57.24 45.78</td></tr><tr><td>Layer 8</td><td colspan=\"2\">48.21 57.70 46.32</td></tr><tr><td>Layer 9</td><td colspan=\"2\">48.13 57.35 44.92</td></tr><tr><td colspan=\"3\">Layer 10 51.97 60.54 49.01</td></tr><tr><td colspan=\"3\">Layer 11 50.86 59.59 47.70</td></tr><tr><td colspan=\"3\">Layer 12 51.16 60.39 50.59</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF21": { |
|
"text": "Comparison of layer performance for Spanish using XLM-Tw-S model Layer 10 51.05 63.42 46.01 Layer 11 50.94 63.56 48.30 Layer 12 50.32 62.71 45.82", |
|
"html": null, |
|
"content": "<table><tr><td>Layers</td><td>JI</td><td>MiF1 MaF1</td></tr><tr><td>Layer 0</td><td colspan=\"2\">42.30 55.45 41.04</td></tr><tr><td>Layer 1</td><td colspan=\"2\">43.42 56.48 41.20</td></tr><tr><td>Layer 2</td><td colspan=\"2\">44.47 57.77 42.11</td></tr><tr><td>Layer 3</td><td colspan=\"2\">45.80 58.93 43.32</td></tr><tr><td>Layer 4</td><td colspan=\"2\">45.76 58.81 44.03</td></tr><tr><td>Layer 5</td><td colspan=\"2\">47.56 60.51 45.21</td></tr><tr><td>Layer 6</td><td colspan=\"2\">48.13 61.02 44.36</td></tr><tr><td>Layer 7</td><td colspan=\"2\">47.51 60.73 46.22</td></tr><tr><td>Layer 8</td><td colspan=\"2\">49.36 62.18 45.01</td></tr><tr><td>Layer 9</td><td colspan=\"2\">49.57 62.27 46.51</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF22": { |
|
"text": "Comparison of layer performance for Arabic using XLM-Tw-S model In", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |