---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
  features:
  - name: question1
    dtype: string
  - name: question2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_duplicate
          '1': duplicate
  - name: idx
    dtype: int32
  splits:
  - name: train.biased
    num_bytes: 42391456
    num_examples: 297735
  - name: train.anti_biased
    num_bytes: 8509364
    num_examples: 66111
  - name: validation.biased
    num_bytes: 4698206
    num_examples: 32968
  - name: validation.anti_biased
    num_bytes: 955548
    num_examples: 7462
  download_size: 70726976
  dataset_size: 56554574
- config_name: partial_input
  features:
  - name: question1
    dtype: string
  - name: question2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_duplicate
          '1': duplicate
  - name: idx
    dtype: int32
  splits:
  - name: train.biased
    num_bytes: 42788212
    num_examples: 297735
  - name: train.anti_biased
    num_bytes: 8112608
    num_examples: 66111
  - name: validation.biased
    num_bytes: 4712327
    num_examples: 33084
  - name: validation.anti_biased
    num_bytes: 941427
    num_examples: 7346
  download_size: 70726976
  dataset_size: 56554574
task_categories:
- text-classification
language:
- en
pretty_name: Quora Question Pairs
---


# Dataset Card for Bias-amplified Splits for QQP

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [GLUE](https://arxiv.org/abs/1804.07461)

### Dataset Summary

Bias-amplified splits is a novel evaluation framework for assessing model robustness by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.

Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.

Here we apply our framework to the Quora Question Pairs dataset (QQP), in which the task is to determine whether two questions are paraphrases of each other (i.e., have the same meaning).

Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.

#### Evaluation Results (DeBERTa-large)

##### For splits based on minority examples:

| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split   | 93.0          | 77.6             |
| Biased training split     | 87.0          | 36.8             |

##### For splits based on partial-input model:

| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split   | 93.0          | 81.3             |
| Biased training split     | 90.3          | 63.9             |

#### Loading the Data

```
from datasets import load_dataset

# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")

# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
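
Continuing from the splits loaded above, a rough sketch of fine-tuning and evaluation (in the spirit of the result tables, but not the paper's exact setup) might look as follows. The checkpoint, hyperparameters, and output directory below are illustrative assumptions.

```
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed checkpoint; the paper reports results with DeBERTa-large.
model_name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def preprocess(examples):
    # QQP is a sentence-pair task: encode question1 and question2 together.
    return tokenizer(examples["question1"], examples["question2"],
                     truncation=True, max_length=128)

train_encoded = train_dataset.map(preprocess, batched=True)
eval_encoded = eval_dataset.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="qqp-biased-train",    # hypothetical output directory
    per_device_train_batch_size=32,   # illustrative hyperparameters
    num_train_epochs=3,
    learning_rate=1e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_encoded,
    eval_dataset=eval_encoded,
    tokenizer=tokenizer,              # enables dynamic padding via the default collator
)

trainer.train()
print(trainer.evaluate())
```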

## Dataset Structure

### Data Instances

Data instances are taken directly from QQP (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
  "idx": 56,
  "question1": "How do I buy used car in India?",
  "question2": "Which used car should I buy in India?",
  "label": 0
}
```
 
### Data Fields

- `idx`: unique identifier for the example within its original data split (e.g., the validation set)
- `question1`: a question asked on Quora 
- `question2`: a question asked on Quora
- `label`: either `0` (`not_duplicate`) or `1` (`duplicate`)
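
The `label` field is stored as a `ClassLabel` feature, so the integer value can be mapped back to its class name with the standard `datasets` API, for example:

```
from datasets import load_dataset

dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")
label_feature = dataset["validation.anti_biased"].features["label"]

example = dataset["validation.anti_biased"][0]
print(example["question1"])
print(example["question2"])
# Convert the integer label back to its class name ("not_duplicate" / "duplicate").
print(label_feature.int2str(example["label"]))
```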

### Data Splits

Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.

Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
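
To make the partial-input idea above concrete, the conceptual sketch below trains a crude partial-input classifier that only sees `question2`, and compares its accuracy on the biased and anti-biased validation splits. This is an illustration of the idea only; the bag-of-words model, subsample size, and features are assumptions and do not reproduce the procedure used to create these splits.

```
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Conceptual illustration only, not the paper's implementation. A bag-of-words
# classifier that sees question2 alone is a crude partial-input model: examples
# such a model classifies correctly are likely to contain annotation artifacts.
dataset = load_dataset("bias-amplified-splits/qqp", "partial_input")
train = dataset["train.biased"].select(range(20000))  # subsample for speed (assumption)

vectorizer = TfidfVectorizer(max_features=50000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(train["question2"]), train["label"])

for split in ("validation.biased", "validation.anti_biased"):
    data = dataset[split]
    preds = clf.predict(vectorizer.transform(data["question2"]))
    acc = (preds == data["label"]).mean()
    print(f"{split}: partial-input accuracy = {acc:.3f}")
```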

#### Minority Examples

| Dataset Split            | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased           | 297735                       |
| Train - anti-biased      | 66111                        |
| Validation - biased      | 32968                        |
| Validation - anti-biased | 7462                         |

#### Partial-input Baselines

| Dataset Split            | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased           | 297735                       |
| Train - anti-biased      | 66111                        |
| Validation - biased      | 33084                        |
| Validation - anti-biased | 7346                         |
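
The split sizes in both tables can be checked directly after loading, for example:

```
from datasets import load_dataset

for config in ("minority_examples", "partial_input"):
    dataset = load_dataset("bias-amplified-splits/qqp", config)
    for split_name, split in dataset.items():
        print(f"{config} / {split_name}: {split.num_rows} examples")
```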

## Dataset Creation

### Curation Rationale

NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset so it can be used to test model robustness.

### Annotations

#### Annotation process

No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.

## Considerations for Using the Data

### Social Impact of Dataset

Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.

### Discussion of Biases

We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.

## Additional Information

### Dataset Curators

Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).

QQP data was released by Quora and is distributed as part of the GLUE benchmark.

### Citation Information

```
@misc{reif2023fighting,
    title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
    author = "Yuval Reif and Roy Schwartz",
    month = may,
    year = "2023",
    url = "https://arxiv.org/pdf/2305.18917",
}
```

Source dataset:
```
@inproceedings{wang2019glue,
  title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
  author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
  note={In the Proceedings of ICLR.},
  year={2019}
}
```