---
language:
  - en
  - uk
  - ru
  - de
  - zh
  - am
  - ar
  - hi
  - es
license: openrail++
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
dataset_info:
  features:
    - name: toxic_sentence
      dtype: string
    - name: neutral_sentence
      dtype: string
  splits:
    - name: zh
      num_bytes: 79089
      num_examples: 400
    - name: es
      num_bytes: 56826
      num_examples: 400
    - name: ru
      num_bytes: 89449
      num_examples: 400
    - name: ar
      num_bytes: 85231
      num_examples: 400
    - name: hi
      num_bytes: 107516
      num_examples: 400
    - name: uk
      num_bytes: 78082
      num_examples: 400
    - name: de
      num_bytes: 86818
      num_examples: 400
    - name: am
      num_bytes: 133489
      num_examples: 400
    - name: en
      num_bytes: 47435
      num_examples: 400
  download_size: 489123
  dataset_size: 763935
configs:
  - config_name: default
    data_files:
      - split: zh
        path: data/zh-*
      - split: es
        path: data/es-*
      - split: ru
        path: data/ru-*
      - split: ar
        path: data/ar-*
      - split: hi
        path: data/hi-*
      - split: uk
        path: data/uk-*
      - split: de
        path: data/de-*
      - split: am
        path: data/am-*
      - split: en
        path: data/en-*
---

# MultiParaDetox

This is the multilingual parallel dataset for text detoxification prepared for the CLEF TextDetox 2024 shared task. For each of 9 languages, we collected 1k pairs of toxic<->detoxified instances, split into two parts: dev (400 pairs) and test (600 pairs).

Currently, only the dev-set toxic sentences are released. The dev-set references and the test-set toxic sentences will be released later, during the test phase of the competition!
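The counts above can be sketched as follows; this is an illustrative summary of the card's numbers (the language codes and split sizes come from the metadata, everything else is hypothetical naming):

```python
# Languages listed in this card's metadata; each has 1k toxic<->neutral
# pairs, split into dev (400 pairs) and test (600 pairs).
LANGS = ["zh", "es", "ru", "ar", "hi", "uk", "de", "am", "en"]
DEV_PAIRS, TEST_PAIRS = 400, 600

# Only the dev-set toxic sentences are available right now.
released_now = {lang: DEV_PAIRS for lang in LANGS}
# Full dataset size once the test phase opens.
full_dataset = {lang: DEV_PAIRS + TEST_PAIRS for lang in LANGS}

print(sum(released_now.values()))   # 3600 dev sentences currently released
print(sum(full_dataset.values()))   # 9000 pairs in total across 9 languages
```

Once released, a given language split could be loaded with `datasets.load_dataset("<repo-id>", split="en")`, where `<repo-id>` stands for this dataset's Hub identifier.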

The original toxic sentences were collected from the following sources: