---
annotations_creators:
- machine-generated
- manual-partial-validation
language_creators:
- expert-generated
language:
- id
license: unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- IDK-MRC
task_categories:
- text-classification
task_ids:
- natural-language-inference
pretty_name: IDK-MRC-NLI
dataset_info:
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  config_name: idkmrc-nli
  splits:
  - name: train
    num_bytes: 5916125
    num_examples: 18664
  - name: validation
    num_bytes: 473125
    num_examples: 1528
  - name: test
    num_bytes: 521375
    num_examples: 1688
  download_size: 6910625
  dataset_size: 6910625
---

# Dataset Card for IDK-MRC-NLI

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [Hugging Face](https://huggingface.co/datasets/muhammadravi251001/idkmrc-nli)
- **Point of Contact:** [Hugging Face](https://huggingface.co/datasets/muhammadravi251001/idkmrc-nli)
- **Experiment:** [Github](https://github.com/muhammadravi251001/multilingual-qas-with-nli)

### Dataset Summary

The IDK-MRC-NLI dataset is derived from the IDK-MRC question answering dataset, using named entity recognition (NER), chunking tags, regular expressions, and embedding similarity to construct its contradiction sets. 
Collected through this process, the dataset comprises several columns beyond premise, hypothesis, and label, including properties aligned with the NER and chunking tags. 
The dataset is designed to support Natural Language Inference (NLI) tasks and contains information extracted from diverse sources to provide comprehensive coverage. 
Each instance consists of a premise, a hypothesis, a label, and additional properties relevant to NLI evaluation.
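
The exact construction pipeline is described in the linked experiment repository; purely as an illustration (not the authors' actual method), the sketch below shows how one of the named ingredients, embedding similarity, could be used to decide whether a candidate answer contradicts the gold answer. The model name and threshold here are assumptions.

```python
# Illustrative sketch only: embedding similarity as a contradiction signal.
# The model and threshold are assumptions, not the authors' configuration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

def is_contradictory(gold_answer: str, candidate_answer: str, threshold: float = 0.5) -> bool:
    """Treat a candidate as contradictory when its embedding is far from the gold answer."""
    emb = model.encode([gold_answer, candidate_answer], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    return similarity < threshold

# A candidate naming a different location should score low similarity.
print(is_contradictory("Kabupaten Kuningan, Provinsi Jawa Barat",
                       "Kabupaten Sleman, Provinsi DI Yogyakarta"))
```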

### Supported Tasks and Leaderboards

- Natural Language Inference for Indonesian

### Languages

Indonesian

## Dataset Structure

### Data Instances

An example from the `test` split looks as follows.

```
{
  "premise": "Karangkancana adalah sebuah kecamatan di Kabupaten Kuningan, Provinsi Jawa Barat, Indonesia.", 
  "hypothesis": "Dimanakah letak Desa Karang kancana? Kabupaten Kuningan, Provinsi Jawa Barat, Indonesia.", 
  "label": 0
}
```
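
For reference, a minimal sketch of loading the data with the `datasets` library, assuming the repository id listed above:

```python
from datasets import load_dataset

# Load all splits from the Hugging Face Hub (repository id taken from this card).
dataset = load_dataset("muhammadravi251001/idkmrc-nli")

print(dataset)             # DatasetDict with train/validation/test splits
print(dataset["test"][0])  # {'premise': ..., 'hypothesis': ..., 'label': ...}
```
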
### Data Fields

The data fields are:
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
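
Because `label` is a `ClassLabel` feature, integer values can be mapped to and from the string names above; a small sketch (reusing the `load_dataset` call from the previous snippet):

```python
from datasets import load_dataset

dataset = load_dataset("muhammadravi251001/idkmrc-nli")
label_feature = dataset["test"].features["label"]

# Map integer labels to their string names and back.
print(label_feature.int2str(0))          # 'entailment'
print(label_feature.str2int("neutral"))  # 1
```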

### Data Splits

The data is split across `train`, `validation`, and `test`.

| split      | # examples |
|------------|-----------:|
| train      |      18664 |
| validation |       1528 |
| test       |       1688 |

## Dataset Creation

### Curation Rationale

Indonesian NLP is considered under-resourced. An NLI dataset is needed to fine-tune NLI models, which can then be applied to QA models to improve QA performance.

### Source Data

#### Initial Data Collection and Normalization

We collected the data from a prominent Indonesian QA dataset (IDK-MRC). The annotation was done entirely by the original dataset's researchers.

#### Who are the source language producers?

This synthetic data was produced by a machine, but the original data was produced by humans.

### Personal and Sensitive Information

There might be some personal information originating from Wikipedia and news articles, especially about famous or public figures.

## Considerations for Using the Data

### Discussion of Biases

The QA dataset (and therefore the NLI dataset derived from it) is created using premise sentences taken from Wikipedia and news articles. These data sources may contain some bias.

### Other Known Limitations

No other known limitations

## Additional Information

### Dataset Curators

This dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, Mohamed bin Zayed University of Artificial Intelligence, and the Korea Advanced Institute of Science & Technology.

### Licensing Information

The license is unknown. Please contact the authors for any information on the dataset.