---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: task_name
dtype: string
- name: label_name
dtype: string
splits:
- name: train
num_bytes: 317612708
num_examples: 1018733
- name: test
num_bytes: 15622840
num_examples: 59140
download_size: 212682398
dataset_size: 333235548
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# NLI Mix Zero-Shot
This dataset is a single entry point for the following train and test datasets:
- train: [MoritzLaurer/dataset_train_nli](https://huggingface.co/datasets/MoritzLaurer/dataset_train_nli)
- test: [MoritzLaurer/dataset_test_concat_nli](https://huggingface.co/datasets/MoritzLaurer/dataset_test_concat_nli)
The data is a mixture of text classification datasets cast into the NLI (Natural Language Inference) format.
It can be used to train a powerful Zero-Shot Text Classification (ZS-TC) model.
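To illustrate the idea behind the NLI format, here is a minimal sketch of how a classification example is typically recast as (premise, hypothesis) pairs for zero-shot classification. The hypothesis template below is an illustrative assumption, not necessarily the exact template used to build this dataset:

```python
def to_nli_pairs(text, candidate_labels, template="This example is about {}."):
    """Turn one classification example into NLI-style (premise, hypothesis) pairs.

    The original text becomes the premise; each candidate label is verbalized
    into a hypothesis via the template. A model trained on NLI data then scores
    entailment vs. non-entailment for each pair, and the label whose hypothesis
    is most entailed wins. (Template is a hypothetical example.)
    """
    return [(text, template.format(label)) for label in candidate_labels]


pairs = to_nli_pairs(
    "The match ended 2-1 after extra time.",
    ["sports", "politics", "technology"],
)
for premise, hypothesis in pairs:
    print(f"premise={premise!r}  hypothesis={hypothesis!r}")
```

The dataset itself can be loaded as usual with the `datasets` library, e.g. `load_dataset("AntoineBlanot/nli-mix-zero-shot")` (repository id inferred from this page).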
For more details on how the dataset was created (source datasets, formatting, cleaning, ...), please refer to the page of each dataset above.
All credit goes to [MoritzLaurer](https://huggingface.co/MoritzLaurer).
Thank you for your hard work and for sharing it with the community!