dardem committed
Commit 6e2c8c3 · verified · 1 Parent(s): b75e801

Update README.md

Files changed (1)
  1. README.md +22 -5
README.md CHANGED
@@ -40,10 +40,27 @@ model(batch)
 ## Citation
 
 ```
-@article{dementieva2024toxicity,
-  title={Toxicity Classification in Ukrainian},
-  author={Dementieva, Daryna and Khylenko, Valeriia and Babakov, Nikolay and Groh, Georg},
-  journal={arXiv preprint arXiv:2404.17841},
-  year={2024}
+@inproceedings{dementieva-etal-2024-toxicity,
+    title = "Toxicity Classification in {U}krainian",
+    author = "Dementieva, Daryna and
+      Khylenko, Valeriia and
+      Babakov, Nikolay and
+      Groh, Georg",
+    editor = {Chung, Yi-Ling and
+      Talat, Zeerak and
+      Nozza, Debora and
+      Plaza-del-Arco, Flor Miriam and
+      R{\"o}ttger, Paul and
+      Mostafazadeh Davani, Aida and
+      Calabrese, Agostina},
+    booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
+    month = jun,
+    year = "2024",
+    address = "Mexico City, Mexico",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.woah-1.19",
+    doi = "10.18653/v1/2024.woah-1.19",
+    pages = "244--255",
+    abstract = "The task of toxicity detection is still a relevant task, especially in the context of safe and fair LMs development. Nevertheless, labeled binary toxicity classification corpora are not available for all languages, which is understandable given the resource-intensive nature of the annotation process. Ukrainian, in particular, is among the languages lacking such resources. To our knowledge, there has been no existing toxicity classification corpus in Ukrainian. In this study, we aim to fill this gap by investigating cross-lingual knowledge transfer techniques and creating labeled corpora by: (i){\textasciitilde}translating from an English corpus, (ii){\textasciitilde}filtering toxic samples using keywords, and (iii){\textasciitilde}annotating with crowdsourcing. We compare LLMs prompting and other cross-lingual transfer approaches with and without fine-tuning offering insights into the most robust and efficient baselines.",
 }
 ```