|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:08:01.791652Z" |
|
}, |
|
"title": "A Dutch Dataset for Cross-lingual Multi-label Toxicity Detection", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Burtenshaw", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Antwerp Centre for Digital Humanities and Literary Criticism", |
|
"institution": "University of Antwerp", |
|
"location": { |
|
"addrLine": "Prinsstraat 13", |
|
"postCode": "2000", |
|
"settlement": "Antwerp", |
|
"country": "Belgium" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Kestemont", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Antwerp Centre for Digital Humanities and Literary Criticism", |
|
"institution": "University of Antwerp", |
|
"location": { |
|
"addrLine": "Prinsstraat 13", |
|
"postCode": "2000", |
|
"settlement": "Antwerp", |
|
"country": "Belgium" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Multi-label toxicity detection is a prominent task, with many research groups, companies, and individuals engaging with it through shared tasks and dedicated venues. This paper describes a cross-lingual approach to multi-label toxicity classification on a newly annotated Dutch-language dataset, using a model trained on English data. We present an ensemble of a Transformer model and an LSTM with multilingual embeddings. The combination of multilingual embeddings and the Transformer model improves performance in the cross-lingual setting.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Multi-label toxicity detection is a prominent task, with many research groups, companies, and individuals engaging with it through shared tasks and dedicated venues. This paper describes a cross-lingual approach to multi-label toxicity classification on a newly annotated Dutch-language dataset, using a model trained on English data. We present an ensemble of a Transformer model and an LSTM with multilingual embeddings. The combination of multilingual embeddings and the Transformer model improves performance in the cross-lingual setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Toxic comment detection is becoming an integral part of online discussion, and most major social media platforms use it. However, that success is not shared equally across languages. Low-resource languages still lack the accurate pre-trained models that are readily available in better-resourced languages, such as English. This is mostly due to a lack of annotated corpora. Inconsistent definitions of the task compound the problem: where quality data does exist, it often uses alternative task definitions. This paper aims to overcome that challenge by annotating a new dataset and evaluating it within a cross-lingual experiment. We perform multi-label text classification, using an ensemble of Transformer and LSTM models with multilingual embeddings (Vaswani et al., 2017; Devlin et al., 2019; Van Hee et al., 2015a). The system is trained on the English data of Wulczyn et al. and evaluated on newly annotated Dutch text from the AMiCA corpus (Wulczyn et al., 2017a; Van Hee et al., 2015a).", |
|
"cite_spans": [ |
|
{ |
|
"start": 763, |
|
"end": 785, |
|
"text": "(Vaswani et al., 2017;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 806, |
|
"text": "Devlin et al., 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 807, |
|
"end": 829, |
|
"text": "Van Hee et al., 2015a)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 977, |
|
"text": "(Wulczyn et al., 2017a;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 978, |
|
"end": 1000, |
|
"text": "Van Hee et al., 2015a)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We selected multi-label toxicity over other label definitions based on its adaptability and on feedback from annotators. Toxicity draws its origins from chemistry, referring to the way a substance can damage an organism. In our experience of annotator training and feedback, it is a straightforward term to communicate: annotators relate quickly to the concept of harmful language that degrades a conversation or debate, much like a poison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Conversation AI group defined multi-label toxicity with Wulczyn et al. (Wulczyn et al., 2017c). The term goes beyond its counterparts by adding fine-grained sub-labels. The original motivation of Wulczyn et al. was for multi-label toxicity to serve as a compatible annotation model for tasks beyond the original Wikipedia dataset. Unlike other similar initiatives, their work focused on the risk that communities break down or turn silent, \"leading many communities to limit or completely shut down user comments\" (Wulczyn et al., 2017a,c). For a detailed overview of multi-label toxicity, see van Aken et al. or Gunasekera et al. (Georgakopoulos et al., 2018; Wulczyn et al., 2017b).", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 98, |
|
"text": "(Wulczyn et al., 2017c)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 544, |
|
"text": "(Wulczyn et al., 2017a,c)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 671, |
|
"text": "(Georgakopoulos et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 694, |
|
"text": "Wulczyn et al., 2017b)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A current challenge within the sub-field of toxicity detection is its definition and operationalisation as a concrete task. Though there is research within the area, many projects take up alternative interpretations and definitions. This has led to grey areas between terms like offensive language and profanity, cyberbullying, and online harassment. In practice, many projects are classifying the same data and phenomena under alternative definitions. This problem is explored in greater detail by Emmery and colleagues (Emmery et al., 2019; Pant and Dadu, 2020; Stappen et al., 2020). AMiCA was a collaborative project between Dutch-speaking NLP research groups investigating cyberbullying. Van Hee et al. facilitated the detailed annotation of many datasets for a range of bullying labels, using real and simulated conversations between children. Table 1 gives the label distribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 521, |
|
"end": 541, |
|
"text": "(Emmery et al., 2019", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 557, |
|
"text": "and Dadu, 2020;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 579, |
|
"text": "Stappen et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 837, |
|
"end": 844, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use a newly annotated version of the AMiCA dataset, initially developed by Van Hee et al. for cyberbullying tasks. In addition, we performed further annotation for multi-label toxicity, following the label guidelines of Wulczyn et al.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Van Hee et al. developed the AMiCA dataset through anonymous donation and simulation, as outlined by Emmery et al.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AMiCA Instant Messages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To annotate the AMiCA dataset for multi-label toxicity labels, we used the annotation instructions outlined in (Wulczyn et al., 2017c). We translated the instructions into Dutch, the native language of the annotators, and gave detailed guidance with an introductory tutorial and handout.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 134, |
|
"text": "(Wulczyn et al., 2017c)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-label Toxicity Annotation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A THREAT describes an intention to inflict pain, injury, or violence against an individual or group. We stored the annotated data in an SQL table, using the row index of the original AMiCA cyberbullying annotations. Table 4 shows the distribution of labels across the English data of Wulczyn et al. and the newly annotated data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 216, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "THREAT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Interannotator Agreement We calculated inter-annotator agreement on the largest set of overlapping instances by the same two annotators, achieving a Krippendorff's alpha of 0.4483 and indicating moderate agreement between annotators. We can compare this to that of Wulczyn et al., which scored 0.45 (Wulczyn et al., 2017a). We can delve further into inter-annotator relations through multi-label use, which Figure 1 reveals. Compare Toxicity and Cyberbullying As a precursor to the main experiments, and to align the new annotation with Van Hee et al., we tested how cyberbullying acts as a naive predictor of toxicity, using the combined labels for each class and the F1 score (Van Hee et al., 2015b; Emmery et al., 2019). We calculated an F1 score of 0.51, revealing that multi-label toxicity does not align closely with cyberbullying.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 334, |
|
"text": "(Wulczyn et al., 2017a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 679, |
|
"end": 702, |
|
"text": "(Van Hee et al., 2015b;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 723, |
|
"text": "Emmery et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 430, |
|
"text": "Figure 1 reveals", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "THREAT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We performed cross-lingual classification using an ensemble of two component models: a fine-tuned multilingual BERT-base model and an LSTM model using Multilingual Unsupervised and Supervised Embeddings (MUSE). We also used two baseline models for comparison: an LSTM without multilingual embeddings and a Support Vector Machine.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We fine-tuned a multilingual BERT-base model with 3 linear layers. A Bidirectional Encoder Representations from Transformers (BERT) model is a pre-trained model that uses bidirectional training to learn contextual attention at the word and sub-word level (Devlin et al., 2019). We used a sub-word token representation that aligns with the base vocabulary representation (Zhang et al., 2020). We fine-tuned the BERT model for 4 epochs over a 10-fold cross-validated dataset. The mean validation and training loss across all folds of the data was 0.05.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 273, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 386, |
|
"text": "(Zhang et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuned BERT-base", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We trained a Long Short-Term Memory (LSTM) network with Multilingual Unsupervised and Supervised Embeddings (MUSE) (Hochreiter and Schmidhuber, 1997).", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 140, |
|
"text": "(Hochreiter and Schmidhuber, 1997;", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LSTM and MUSE Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We trained the LSTM model for 12 epochs over a 10-fold cross-validated dataset. The mean validation and training loss for all splits of the data was 0.03.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LSTM and MUSE Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We used a Random Forest ensemble of the LSTM and BERT models on a cross-validated training set with grid-searched parameters (Breiman, 2001; Nowak et al., 2017). A key risk in ensemble training is overfitting (Pourtaheri and Zahiri, 2016); to mitigate this, all models used a stratified k-fold structure (Yadav and Shukla, 2016).", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 140, |
|
"text": "(Breiman, 2001;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 160, |
|
"text": "Nowak et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 333, |
|
"text": "(Yadav and Shukla, 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We used a stratified k-fold configuration of the English and Dutch data to train and fine-tune models. First, we trained and fine-tuned models on a 'train' portion and collected the predicted labels on 'test' portions of the folds, split for English and Dutch data. This allowed us to evaluate performance on each language separately. Next, we trained the ensemble model on the component model predictions. Finally, we used an exhaustive grid search to select hyperparameters (Bergstra and Bengio, 2012) and a Receiver Operating Characteristic (ROC) analysis to select decision thresholds for the component models (Fawcett, 2006).", |
|
"cite_spans": [ |
|
{ |
|
"start": 462, |
|
"end": 489, |
|
"text": "(Bergstra and Bengio, 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 607, |
|
"text": "(Fawcett, 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Fine-tuning", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We performed error analysis to interpret model performance in relation to labels and the language of comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Sub-label Performance Figure 2 reveals the precision, recall, and F1 score of the ensemble model on all labels. Furthermore, we can see that the model performs better at negative label prediction, a common trait in transformer model classification. We compared the component models through their scores on the English and Dutch data, shown in Table 6. Logically, the LSTM with MUSE embeddings performs poorly on English data, without relevant embedding weights. On the other hand, the BERT model performs well in both languages, and the ensemble model relies on that when classifying English data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 30, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 6", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have demonstrated that by using multilingual pre-trained language models within an ensemble approach, we can classify multi-label toxicity in another language. Furthermore, we have demonstrated that the BERT model's underlying training affects target-language performance by analysing the performance of baseline, component, and ensemble models on cross-lingual features. Finally, Table 5 reveals that the component models were able to overcome an excess of false positives that hindered the baseline methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 399, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Random Search for Hyper-Parameter Optimization", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bergstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Bergstra and Yoshua Bengio. 2012. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research 13 (2012), 25.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Random Forests", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Machine Learning", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "5--32", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1023/A:1010933404324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman. 2001. Random Forests. Machine Learn- ing 45, 1 (Oct. 2001), 5-32. https://doi.org/ 10.1023/A:1010933404324", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Un- derstanding. arXiv:1810.04805 [cs] (May 2019). arXiv:1810.04805 [cs]", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Current Limitations in Cyberbullying Detection: On Evaluation Criteria, Reproducibility, and Data Scarcity", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Emmery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Verhoeven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Pauw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gilles", |
|
"middle": [], |
|
"last": "Jacobs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cynthia", |
|
"middle": [], |
|
"last": "Van Hee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Els", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Desmet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e9ronique", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.11922" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Emmery, Ben Verhoeven, Guy De Pauw, Gilles Jacobs, Cynthia Van Hee, Els Lefever, Bart Desmet, V\u00e9ronique Hoste, and Walter Daelemans. 2019. Current Limitations in Cyberbullying Detection: On Evaluation Criteria, Reproducibility, and Data Scarcity. arXiv:1910.11922 [cs] (Oct. 2019). arXiv:1910.11922 [cs]", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An Introduction to ROC Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Fawcett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Pattern recognition letters", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "861--874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Fawcett. 2006. An Introduction to ROC Analysis. Pattern recognition letters 27, 8 (2006), 861-874.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Convolutional Neural Networks for Toxic Comment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Spiros", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Georgakopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sotiris", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tasoulis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aristidis", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Vrahatis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vassilis", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Plagianakos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 10th Hellenic Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Spiros V. Georgakopoulos, Sotiris K. Tasoulis, Aris- tidis G. Vrahatis, and Vassilis P. Plagianakos. 2018. Convolutional Neural Networks for Toxic Comment Classification. In Proceedings of the 10th Hellenic Conference on Artificial Intelligence. 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Long Short-Term Memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Unsupervised Machine Translation Using Monolingual Corpora Only", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised Machine Translation Using Monolingual Corpora Only. arXiv preprint arXiv:1711.00043 (2017). arXiv:1711.00043", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "LSTM Recurrent Neural Networks for Short Text and Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Nowak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet", |
|
"middle": [], |
|
"last": "Taspinar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafa\u0142", |
|
"middle": [], |
|
"last": "Scherer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Artificial Intelligence and Soft Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakub Nowak, Ahmet Taspinar, and Rafa\u0142 Scherer. 2017. LSTM Recurrent Neural Networks for Short Text and Sentiment Classification. In Ar- tificial Intelligence and Soft Computing (Lecture Notes in Computer Science), Leszek Rutkowski, Marcin Korytkowski, Rafa\u0142 Scherer, Ryszard Tadeusiewicz, Lotfi A. Zadeh, and Jacek M.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Cross-Domain and Cross-Lingual Abusive Language Detection: A Hybrid Approach with Deep Learning and a Multilingual Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Endang", |
|
"middle": [ |
|
"Wahyu" |
|
], |
|
"last": "Pamungkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Endang Wahyu Pamungkas and Viviana Patti. 2019. Cross-Domain and Cross-Lingual Abusive Lan- guage Detection: A Hybrid Approach with Deep Learning and a Multilingual Lexicon. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Association for Computational Linguis- tics, Florence, Italy, 363-370.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Cross-Lingual Inductive Transfer to Detect Offensive Language", |
|
"authors": [ |
|
{ |
|
"first": "Kartikey", |
|
"middle": [], |
|
"last": "Pant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanvi", |
|
"middle": [], |
|
"last": "Dadu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.03771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kartikey Pant and Tanvi Dadu. 2020. Cross- Lingual Inductive Transfer to Detect Offensive Language. arXiv:2007.03771 [cs] (July 2020). arXiv:2007.03771 [cs]", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Ensemble Classifiers with Improved Overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Zeinab", |
|
"middle": [ |
|
"Khatoun" |
|
], |
|
"last": "Pourtaheri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seyed", |
|
"middle": [ |
|
"Hamid" |
|
], |
|
"last": "Zahiri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC). 93-97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CSIEC.2016.7482130" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeinab Khatoun Pourtaheri and Seyed Hamid Zahiri. 2016. Ensemble Classifiers with Improved Over- fitting. In 2016 1st Conference on Swarm Intelli- gence and Evolutionary Computation (CSIEC). 93- 97. https://doi.org/10.1109/CSIEC.2016. 7482130", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Cross-Lingual Zero-and Few-Shot Hate Speech Detection Utilising Frozen Transformer Language Models and AXEL", |
|
"authors": [ |
|
{ |
|
"first": "Lukas", |
|
"middle": [], |
|
"last": "Stappen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Brunn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bj\u00f6rn", |
|
"middle": [], |
|
"last": "Schuller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.13850" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lukas Stappen, Fabian Brunn, and Bj\u00f6rn Schuller. 2020. Cross-Lingual Zero-and Few-Shot Hate Speech Detection Utilising Frozen Transformer Lan- guage Models and AXEL. arXiv:2004.13850 [cs, stat] (April 2020). arXiv:2004.13850 [cs, stat]", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Automatic Detection and Prevention of Cyberbullying", |
|
"authors": [ |
|
{ |
|
"first": "Cynthia", |
|
"middle": [], |
|
"last": "Van Hee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Els", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Verhoeven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Mennes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Desmet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daele- mans, and V\u00e9ronique Hoste. 2015a. Automatic De- tection and Prevention of Cyberbullying.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Detection and Fine-Grained Classification of Cyberbullying Events", |
|
"authors": [ |
|
{ |
|
"first": "Cynthia", |
|
"middle": [], |
|
"last": "Van Hee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Els", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Verhoeven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Mennes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Desmet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing. INCOMA Ltd", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "672--680", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daele- mans, and Veronique Hoste. 2015b. Detection and Fine-Grained Classification of Cyberbullying Events. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing. INCOMA Ltd. Shoumen, BULGARIA, Hissar, Bulgaria, 672-680.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Attention Is All You Need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.03762" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv:1706.03762 [cs] (Dec. 2017). arXiv:1706.03762 [cs]", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Ex Machina: Personal Attacks Seen at Scale", |
|
"authors": [ |
|
{ |
|
"first": "Ellery", |
|
"middle": [], |
|
"last": "Wulczyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1610.08914" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017a. Ex Machina: Personal Attacks Seen at Scale. arXiv:1610.08914 [cs] (Feb. 2017). arXiv:1610.08914 [cs]", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Ex Machina: Personal Attacks Seen at Scale", |
|
"authors": [ |
|
{ |
|
"first": "Ellery", |
|
"middle": [], |
|
"last": "Wulczyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 26th International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1391--1399", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017b. Ex Machina: Personal Attacks Seen at Scale. In Proceedings of the 26th International Conference on World Wide Web. 1391-1399.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Wikipedia Talk Labels: Personal Attacks", |
|
"authors": [ |
|
{ |
|
"first": "Ellery", |
|
"middle": [], |
|
"last": "Wulczyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.6084/M9.FIGSHARE.4054689" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017c. Wikipedia Talk Labels: Personal At- tacks. https://doi.org/10.6084/M9.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Analysis of K-Fold Cross-Validation over Hold-Out Validation on Colossal Datasets for Quality Classification", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Yadav", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Shukla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE 6th International Conference on Advanced Computing (IACC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--83", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/IACC.2016.25" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Yadav and S. Shukla. 2016. Analysis of K- Fold Cross-Validation over Hold-Out Validation on Colossal Datasets for Quality Classification. In 2016 IEEE 6th International Conference on Advanced Computing (IACC). 78-83. https://doi.org/ 10.1109/IACC.2016.25", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Semantics-Aware BERT for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhuosheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuwei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zuchao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuailiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.02209" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020. Semantics-Aware BERT for Language Un- derstanding. arXiv:1909.02209 [cs] (Feb. 2020). arXiv:1909.02209 [cs]", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Correlation Matrix of Toxic labels on Annotated Amica Dataset" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Classification Report from Ensemble Approach on all toxicity labels" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "reveals the macro details of the data used with original cyberbullying token labels.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Bullying Tokens</td><td>2,343</td></tr><tr><td>Negative Tokens</td><td>2,546</td></tr><tr><td>All Tokens</td><td>62,340</td></tr><tr><td>Mean Tokens per msg</td><td>12</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "AMiCA data lexical statistics", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "describes the sub-labels: Toxicity, Severe Toxicity, Identity Attack, Insult, Profanity, and Threat.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>TOXICITY</td></tr><tr><td>Rude, disrespectful, or unreasonable comment that</td></tr><tr><td>is likely to make people leave a discussion.</td></tr><tr><td>SEVERE_TOXICITY</td></tr><tr><td>A very hateful, aggressive, disrespectful comment</td></tr><tr><td>or otherwise very likely to make a user leave a</td></tr><tr><td>discussion.</td></tr><tr><td>IDENTITY_ATTACK</td></tr><tr><td>Negative or hateful comments targeting someone</td></tr><tr><td>because of their identity.</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Annotated Labels in Dutch (New) and English</td></tr><tr><td>(Wulczyn 2017) data. n shows the number of com-</td></tr><tr><td>ments for each label and % shows the percentage of</td></tr><tr><td>the total comments for that label.</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "and the ensemble approach achieved the highest score. We also find that component models were able to overcome the low precision score seen in baseline methods.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>reveals results for baselines,</td></tr><tr><td>component models, and ensemble model. We</td></tr><tr><td>express results as Area Under the Curve, mean</td></tr><tr><td>Precision, mean Recall, mean F1 for all labels.</td></tr><tr><td>Baseline models are a Support Vector Machine of</td></tr><tr><td>Continuous Bag-of-Words representations and an</td></tr><tr><td>LSTM without Multilingual Universal Sentence</td></tr><tr><td>Embeddings. Both component models achieved</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "Results Table of baselines, component, and ensemble models. Results are expressed as AUC, mean Precision, mean Recall, mean F1 for all labels.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"text": "Cross-lingual Performance: F1 Scores of underlinecomponent and ensemble models. EN are scores on the Wulczyn data, NL are score on the new Dutch data.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |