dardem committed
Commit a7346ee
1 Parent(s): e80aeb8

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -14,6 +14,14 @@ size_categories:
 
 This repository contains information about the ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models and an evaluation methodology for the detoxification of English texts. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.
 
+ 📰 **Updates**
+
+ **[2024]** We have also created versions of ParaDetox in more languages. You can check out the [RuParaDetox](https://huggingface.co/datasets/s-nlp/ru_paradetox) dataset as well as the [Multilingual TextDetox](https://huggingface.co/textdetox) project, which covers 9 languages.
+
+ Corresponding papers:
+ * [MultiParaDetox: Extending Text Detoxification with Parallel Data to New Languages](https://aclanthology.org/2024.naacl-short.12/) (NAACL 2024)
+ * [Overview of the Multilingual Text Detoxification Task at PAN 2024](https://ceur-ws.org/Vol-3740/paper-223.pdf) (CLEF Shared Task 2024)
+
 ## ParaDetox Collection Pipeline
 
 The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform in three steps:
@@ -28,14 +36,6 @@ As a result, we get paraphrases for 11,939 toxic sentences (on average 1.66 par
 
 In addition to the full ParaDetox dataset, we also release the [samples](https://huggingface.co/datasets/s-nlp/en_non_detoxified) that annotators marked as "cannot rewrite" in *Task 1* of the crowdsourcing pipeline.
 
- **Update 2024: Multilingual ParaDetox**
-
- We have also created versions of ParaDetox in more languages. You can check out the [RuParaDetox](https://huggingface.co/datasets/s-nlp/ru_paradetox) dataset as well as the [Multilingual TextDetox](https://huggingface.co/textdetox) project, which covers 9 languages.
-
- Corresponding papers:
- * [MultiParaDetox: Extending Text Detoxification with Parallel Data to New Languages](https://aclanthology.org/2024.naacl-short.12/) (NAACL 2024)
- * [Overview of the Multilingual Text Detoxification Task at PAN 2024](https://ceur-ws.org/Vol-3740/paper-223.pdf) (CLEF Shared Task 2024)
-
 # Detoxification evaluation
 
 The automatic evaluation of the models was based on three parameters:
 
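For readers who want to try the corpus this README describes, here is a minimal loading sketch with the Hugging Face `datasets` library. The repo id `s-nlp/paradetox`, the `train` split name, and the column layout are assumptions for illustration rather than details stated in this diff; check the dataset card for the actual names.

```python
from datasets import load_dataset

# Load the parallel corpus. The repo id "s-nlp/paradetox" and the "train" split
# are assumptions, not taken from the diff above.
paradetox = load_dataset("s-nlp/paradetox", split="train")

# Each record should pair a toxic source sentence with a detoxified paraphrase;
# print a few rows to see the actual column names.
for row in paradetox.select(range(3)):
    print(row)
```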