---
dataset_info:
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: task_name
    dtype: string
  - name: label_name
    dtype: string
  splits:
  - name: train
    num_bytes: 317612708
    num_examples: 1018733
  - name: test
    num_bytes: 15622840
    num_examples: 59140
  download_size: 212682398
  dataset_size: 333235548
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# NLI Mix Zero-Shot

This dataset is a single entry point for the following train and test datasets:
- train: [MoritzLaurer/dataset_train_nli](https://huggingface.co/datasets/MoritzLaurer/dataset_train_nli)
- test: [MoritzLaurer/dataset_test_concat_nli](https://huggingface.co/datasets/MoritzLaurer/dataset_test_concat_nli)

The dataset consists of a mixture of text classification datasets cast in the NLI (Natural Language Inference) format.
It can be used to train a powerful Zero-Shot Text Classification (ZS-TC) model.
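
As a minimal sketch of how to work with it, assuming the standard 🤗 `datasets` API and a placeholder repo id (substitute the actual path of this dataset on the Hub):

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/nli-mix-zero-shot")

# DatasetDict with a "train" split (1,018,733 examples)
# and a "test" split (59,140 examples).
print(ds)

# Each example pairs a premise with a label-derived hypothesis,
# plus metadata about the originating task and label.
example = ds["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(example["task_name"], "->", example["label_name"])
```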

For more details on the creation of the dataset (source datasets, data format, data cleaning, ...), please refer to the page of each dataset.

All credit goes to [MoritzLaurer](https://huggingface.co/MoritzLaurer).
Thank you for your hard work and for sharing it with the community!