---
language:
- en
pretty_name: CANNOT
license: cc-by-sa-4.0
size_categories:
- 10K<n<100K
---

<p align="center"><img width="500" src="https://github.com/dmlls/cannot-dataset/assets/22967053/a380dfdf-3514-4771-90c4-636698d5043d" alt="CANNOT dataset"></p>
<p align="center" display="inline-block">
  <a href="https://github.com/dmlls/cannot-dataset/">
    <img src="https://img.shields.io/badge/version-1.1-green">
  </a>
</p>
<h1 align="center">Compilation of ANnotated, Negation-Oriented Text-pairs</h1>

---

# Dataset Card for CANNOT

## Dataset Description

- **Homepage:** https://github.com/dmlls/cannot-dataset
- **Repository:** https://github.com/dmlls/cannot-dataset
- **Paper:** [This is not correct! Negation-aware Evaluation of Language Generation Systems](https://arxiv.org/abs/2307.13989)

### Dataset Summary


**CANNOT** is a dataset that focuses on negated textual pairs. It currently
contains **77,376 samples**, of which roughly half are negated pairs of
sentences; the other half are non-negated pairs (paraphrased versions of each
other).

The most frequent type of negation in the dataset is verbal negation (e.g.,
will → won't), although the dataset also contains pairs based on antonyms (e.g., cold → hot).

### Languages
CANNOT exclusively contains texts in **English**.

## Dataset Structure

The dataset is given as a
[`.tsv`](https://en.wikipedia.org/wiki/Tab-separated_values) file with the
following structure:

| premise     | hypothesis                                         | label |
|:------------|:---------------------------------------------------|:-----:|
| A sentence. | An equivalent, non-negated sentence (paraphrased). | 0     |
| A sentence. | The sentence negated.                              | 1     |


The dataset can be easily loaded into a Pandas DataFrame by running:

```python
import pandas as pd

dataset = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t')
```
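
A quick sanity check of the loaded DataFrame (a minimal sketch, assuming the file loaded above) confirms the columns and the roughly balanced label distribution described in the summary:

```python
# Expected columns: premise, hypothesis, label.
print(dataset.columns.tolist())

# Label 0 (paraphrase) and label 1 (negation) should be roughly balanced.
print(dataset["label"].value_counts())
```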

## Dataset Creation

The dataset has been created by cleaning up and merging the following datasets:

1. _Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal
    Negation_ (see
[`datasets/nan-nli`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/nan-nli)).

2. _GLUE Diagnostic Dataset_ (see
[`datasets/glue-diagnostic`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/glue-diagnostic)).

3. _Automated Fact-Checking of Claims from Wikipedia_ (see
[`datasets/wikifactcheck-english`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/wikifactcheck-english)).

4. _From Group to Individual Labels Using Deep Features_ (see
[`datasets/sentiment-labelled-sentences`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/sentiment-labelled-sentences)).
In this case, the negated sentences were obtained using the Python module
[`negate`](https://github.com/dmlls/negate) (a usage sketch is given after this list).

5. _It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With
Antonyms and Negation Using the New SemAntoNeg Benchmark_ (see
[`datasets/antonym-substitution`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/antonym-substitution)).
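
A minimal sketch of the rule-based negation step mentioned in item 4 is shown below. It assumes the `Negator` interface documented in the `negate` repository; the exact constructor options may vary between versions.

```python
from negate import Negator  # pip install negate

# Load the default English model.
negator = Negator()

# Turn a non-negated sentence into its negated counterpart.
sentence = "The food was really good."
negated = negator.negate_sentence(sentence)
print(negated)  # e.g., "The food wasn't really good."
```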

<br>

Once processed, the number of remaining samples from each of the datasets above is:

| Dataset                                                                   | Samples    |
|:--------------------------------------------------------------------------|-----------:|
| Not another Negation Benchmark                                            |      118   |
| GLUE Diagnostic Dataset                                                   |      154   |
| Automated Fact-Checking of Claims from Wikipedia                          |   14,970   |
| From Group to Individual Labels Using Deep Features                       |    2,110   |
| It Is Not Easy To Detect Paraphrases                                      |    8,597   |
| <div align="right"><b>Total</b></div>                                     | **25,949** |

<br>

Additionally, for each of the negated samples, another pair of non-negated
sentences has been added by paraphrasing them with the pre-trained model
[`🤗tuner007/pegasus_paraphrase`](https://huggingface.co/tuner007/pegasus_paraphrase).
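
For illustration, such paraphrases can be generated roughly as follows (a minimal sketch using the standard `transformers` interface for this model; the generation parameters are illustrative and not necessarily those used to build the dataset):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "tuner007/pegasus_paraphrase"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

def paraphrase(sentence: str) -> str:
    """Return a single beam-searched paraphrase of `sentence`."""
    batch = tokenizer([sentence], truncation=True, padding="longest",
                      max_length=60, return_tensors="pt")
    generated = model.generate(**batch, max_length=60, num_beams=10,
                               num_return_sequences=1)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(paraphrase("The service was not as good as we expected."))
```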

Finally, the swapped version of each pair (premise ⇋ hypothesis) has also been
included, and any duplicates have been removed.
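
The swapping and deduplication step can be sketched with pandas as follows (a toy example; `df` stands in for the merged pairs with the columns shown earlier):

```python
import pandas as pd

# Toy stand-in for the merged pairs (columns: premise, hypothesis, label).
df = pd.DataFrame({
    "premise": ["The food was great."],
    "hypothesis": ["The food was not great."],
    "label": [1],
})

# Swapped version of each pair (premise ⇋ hypothesis).
swapped = df.rename(columns={"premise": "hypothesis", "hypothesis": "premise"})

# Keep both directions and drop any exact duplicates.
full = pd.concat([df, swapped], ignore_index=True).drop_duplicates()
print(full)
```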

With this, the number of premises/hypotheses in the CANNOT dataset that appear
in the original datasets is:

| <div align="left"><b>Dataset</b></div>                                                                   | <div align="center"><b>Sentences</b></div>             |
|:--------------------------------------------------------------------------|----------------------:|
| Not another Negation Benchmark                                            |         552 &nbsp;&nbsp;&nbsp; (0.36 %) |
| GLUE Diagnostic Dataset                                                   |         586 &nbsp;&nbsp;&nbsp; (0.38 %) |
| Automated Fact-Checking of Claims from Wikipedia                          |      89,728 &nbsp; (59.98 %) |
| From Group to Individual Labels Using Deep Features                       |      12,626 &nbsp;&nbsp;&nbsp; (8.16 %) |
| It Is Not Easy To Detect Paraphrases                                      |      17,198 &nbsp; (11.11 %) |
| <div align="right"><b>Total</b></div>                                     | **120,690** &nbsp; (77.99 %) |

The percentages above are relative to the total number of premises and
hypotheses in the CANNOT dataset. The remaining 22.01 % (34,062 sentences) are
the novel premises/hypotheses added through paraphrasing and rule-based negation.


## Additional Information

### Licensing Information

The CANNOT dataset is released under [CC BY-SA
4.0](https://creativecommons.org/licenses/by-sa/4.0/).

<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
    <img alt="Creative Commons License" width="100px" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"/>
</a>

### Citation
Please cite our [INLG 2023 paper](https://arxiv.org/abs/2307.13989) if you use our dataset.
**BibTeX:**
```bibtex
@misc{anschütz2023correct,
      title={This is not correct! Negation-aware Evaluation of Language Generation Systems}, 
      author={Miriam Anschütz and Diego Miguel Lozano and Georg Groh},
      year={2023},
      eprint={2307.13989},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Contributions to the dataset can be submitted through the [project
repository](https://github.com/dmlls/cannot-dataset).