---
language:
  - en
  - uk
  - ru
  - de
  - zh
  - am
  - ar
  - hi
  - es
license: openrail++
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
dataset_info:
  features:
    - name: toxic_sentence
      dtype: string
    - name: neutral_sentence
      dtype: string
  splits:
    - name: en
      num_bytes: 47435
      num_examples: 400
    - name: ru
      num_bytes: 89453
      num_examples: 400
    - name: uk
      num_bytes: 78106
      num_examples: 400
    - name: de
      num_bytes: 86818
      num_examples: 400
    - name: es
      num_bytes: 56868
      num_examples: 400
    - name: am
      num_bytes: 133489
      num_examples: 400
    - name: zh
      num_bytes: 79089
      num_examples: 400
    - name: ar
      num_bytes: 85237
      num_examples: 400
    - name: hi
      num_bytes: 107518
      num_examples: 400
  download_size: 489288
  dataset_size: 764013
configs:
  - config_name: default
    data_files:
      - split: en
        path: data/en-*
      - split: ru
        path: data/ru-*
      - split: uk
        path: data/uk-*
      - split: de
        path: data/de-*
      - split: es
        path: data/es-*
      - split: am
        path: data/am-*
      - split: zh
        path: data/zh-*
      - split: ar
        path: data/ar-*
      - split: hi
        path: data/hi-*
---

# MultiParaDetox

This is the multilingual parallel dataset for text detoxification prepared for the CLEF TextDetox 2024 shared task. For each of the 9 languages, we collected 1k pairs of toxic<->detoxified instances, split into two parts: dev (400 pairs) and test (600 pairs).

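A minimal usage sketch with the `datasets` library is shown below; the repository id used here is an assumption and should be replaced with this dataset's actual id on the Hub:

```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption; replace it with this dataset's id.
ds = load_dataset("textdetox/multilingual_paradetox")

# Each language is exposed as its own split with
# "toxic_sentence" / "neutral_sentence" string columns.
for lang in ["en", "ru", "uk", "de", "es", "am", "zh", "ar", "hi"]:
    example = ds[lang][0]
    print(lang, example["toxic_sentence"], "->", example["neutral_sentence"])
```
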
!!! April 23rd update: We are releasing the parallel dev set! The test part for the final phase of the competition is available here !!!

The list of sources for the original toxic sentences: