annotations_creators:
  - crowdsourced
language_creators:
  - machine-generated
  - expert-generated
language:
  - ko
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
source_datasets:
  - extended|multi_nli
  - extended|snli
  - extended|xnli
task_categories:
  - text-classification
task_ids:
  - natural-language-inference
  - multi-input-text-classification
paperswithcode_id: kornli
pretty_name: KorNLI
dataset_info:
  - config_name: multi_nli
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': neutral
              '2': contradiction
    splits:
      - name: train
        num_bytes: 84728887
        num_examples: 392702
    download_size: 54693610
    dataset_size: 84728887
  - config_name: snli
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': neutral
              '2': contradiction
    splits:
      - name: train
        num_bytes: 80136649
        num_examples: 550152
    download_size: 22015955
    dataset_size: 80136649
  - config_name: xnli
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': neutral
              '2': contradiction
    splits:
      - name: validation
        num_bytes: 518822
        num_examples: 2490
      - name: test
        num_bytes: 1047429
        num_examples: 5010
    download_size: 529321
    dataset_size: 1566251
configs:
  - config_name: multi_nli
    data_files:
      - split: train
        path: multi_nli/train-*
  - config_name: snli
    data_files:
      - split: train
        path: snli/train-*
  - config_name: xnli
    data_files:
      - split: validation
        path: xnli/validation-*
      - split: test
        path: xnli/test-*

# Dataset Card for "kor_nli"

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

### Dataset Summary

KorNLI is a collection of Korean natural language inference (NLI) datasets. The SNLI and MultiNLI training sets were machine-translated into Korean, while the XNLI development and test sets were translated by professional human translators.
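
Each of the three configurations listed in the YAML header loads independently with the Hugging Face `datasets` library. A minimal sketch, assuming `datasets` is installed; the `load_kornli` helper and the split table are illustrative, not part of the dataset:

```python
from typing import Dict, List

# Config -> available splits, as declared in the YAML header above.
KORNLI_SPLITS: Dict[str, List[str]] = {
    "multi_nli": ["train"],
    "snli": ["train"],
    "xnli": ["validation", "test"],
}


def load_kornli(config: str, split: str):
    """Load one KorNLI split via `datasets.load_dataset`.

    Requires `pip install datasets`; the import is deferred so the
    split table above remains usable offline.
    """
    if config not in KORNLI_SPLITS:
        raise ValueError(f"unknown config {config!r}")
    if split not in KORNLI_SPLITS[config]:
        raise ValueError(f"config {config!r} has no {split!r} split")
    from datasets import load_dataset

    return load_dataset("kor_nli", config, split=split)
```

For example, `load_kornli("xnli", "validation")` returns a `Dataset` with `premise`, `hypothesis`, and `label` columns.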

### Supported Tasks and Leaderboards

More Information Needed

### Languages

More Information Needed

## Dataset Structure

### Data Instances

#### multi_nli

- Size of downloaded dataset files: 54.69 MB
- Size of the generated dataset: 84.73 MB
- Total amount of disk used: 139.42 MB

An example of 'train' looks as follows.


#### snli

- Size of downloaded dataset files: 22.02 MB
- Size of the generated dataset: 80.14 MB
- Total amount of disk used: 102.15 MB

An example of 'train' looks as follows.


#### xnli

- Size of downloaded dataset files: 0.53 MB
- Size of the generated dataset: 1.57 MB
- Total amount of disk used: 2.10 MB

An example of 'validation' looks as follows.


### Data Fields

The data fields are the same among all splits.

#### multi_nli

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### snli

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### xnli

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
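
All three configs share the same `ClassLabel` encoding. A small stand-alone sketch of the mapping (plain Python, no extra dependencies; the helper names are illustrative):

```python
# Shared label encoding, taken from the schema above.
ID2LABEL = {0: "entailment", 1: "neutral", 2: "contradiction"}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}


def label_name(label_id: int) -> str:
    """Map an integer label from any KorNLI config to its string name."""
    return ID2LABEL[label_id]
```

This is handy when converting model predictions (integer class ids) back to readable labels.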

### Data Splits

#### multi_nli

|           | train  |
|-----------|--------|
| multi_nli | 392702 |

#### snli

|      | train  |
|------|--------|
| snli | 550152 |

#### xnli

|      | validation | test |
|------|------------|------|
| xnli | 2490       | 5010 |
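
The split tables above can be cross-checked programmatically. The counts below are copied from the tables; the totals are derived, not stated in the card:

```python
# (config, split) -> num_examples, from the Data Splits tables above.
NUM_EXAMPLES = {
    ("multi_nli", "train"): 392_702,
    ("snli", "train"): 550_152,
    ("xnli", "validation"): 2_490,
    ("xnli", "test"): 5_010,
}

# Machine-translated training examples vs. human-translated evaluation examples.
train_total = sum(n for (_, split), n in NUM_EXAMPLES.items() if split == "train")
eval_total = sum(n for (_, split), n in NUM_EXAMPLES.items() if split != "train")
total = train_total + eval_total
```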

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

The dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0).

### Citation Information

```
@article{ham2020kornli,
  title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
  author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
  journal={arXiv preprint arXiv:2004.03289},
  year={2020}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.