Commit a943fa1 (1 parent: 4775a9e) by dardem: Update README.md

Files changed (1): README.md +48 −3
---
license: openrail++
task_categories:
- text2text-generation
language:
- es
size_categories:
- 1K<n<10K
---
**Spanish Parallel Text Detoxification**

A parallel corpus for the text detoxification task in Spanish, built from a Spanish tweets [corpus](https://www.mdpi.com/1424-8220/19/21/4654) and a Spanish hate speech [corpus](https://aclanthology.org/2022.lrec-1.785/).
For more details, see the MultiParaDetox paper.
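Since the card lists `pandas` among the supported libraries and the data format is CSV, here is a minimal sketch of reading the corpus as toxic/neutral sentence pairs. The column names and the inline sample below are assumptions for illustration only; check the actual CSV header before use:

```python
import io

import pandas as pd

# Stand-in for the dataset CSV; the real file would be downloaded from the
# Hub (e.g. via the `datasets` library's load_dataset) and its column names
# may differ from the hypothetical ones used here.
sample_csv = io.StringIO(
    "toxic_sentence,neutral_sentence\n"
    '"ejemplo tóxico","ejemplo neutral"\n'
)
df = pd.read_csv(sample_csv)

# Each row is one parallel pair, suitable as a (source, target) example
# for text2text-generation training.
pairs = list(zip(df["toxic_sentence"], df["neutral_sentence"]))
print(pairs)
```

With the `datasets` library, `load_dataset` on the repository id would expose the same columns as dataset features.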

## Citation

If you would like to acknowledge our work, please cite the following manuscripts:

```
@inproceedings{dementieva-etal-2024-multiparadetox,
    title = "{M}ulti{P}ara{D}etox: Extending Text Detoxification with Parallel Data to New Languages",
    author = "Dementieva, Daryna  and
      Babakov, Nikolay  and
      Panchenko, Alexander",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-short.12",
    pages = "124--140",
    abstract = "Text detoxification is a textual style transfer (TST) task where a text is paraphrased from a toxic surface form, e.g. featuring rude words, to the neutral register. Recently, text detoxification methods found their applications in various task such as detoxification of Large Language Models (LLMs) (Leong et al., 2023; He et al., 2024; Tang et al., 2023) and toxic speech combating in social networks (Deng et al., 2023; Mun et al., 2023; Agarwal et al., 2023). All these applications are extremely important to ensure safe communication in modern digital worlds. However, the previous approaches for parallel text detoxification corpora collection{---}ParaDetox (Logacheva et al., 2022) and APPADIA (Atwell et al., 2022){---}were explored only in monolingual setup. In this work, we aim to extend ParaDetox pipeline to multiple languages presenting MultiParaDetox to automate parallel detoxification corpus collection for potentially any language. Then, we experiment with different text detoxification models{---}from unsupervised baselines to LLMs and fine-tuned models on the presented parallel corpora{---}showing the great benefit of parallel corpus presence to obtain state-of-the-art text detoxification models for any language.",
}
```

```
@inproceedings{dementieva2024overview,
  title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
  author={Dementieva, Daryna and Moskovskiy, Daniil and Babakov, Nikolay and Ayele, Abinew Ali and Rizwan, Naquee and Schneider, Florian and Wang, Xintong and Yimam, Seid Muhie and Ustalov, Dmitry and Stakovskii, Elisei and Smirnova, Alisa and Elnagar, Ashraf and Mukherjee, Animesh and Panchenko, Alexander},
  booktitle={Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum},
  editor={Guglielmo Faggioli and Nicola Ferro and Petra Galu{\v{s}}{\v{c}}{\'a}kov{\'a} and Alba Garc{\'i}a Seco de Herrera},
  year={2024},
  organization={CEUR-WS.org}
}
```