---
license: mit
language:
- en
size_categories:
- n<1K
task_categories:
- question-answering
---

<style>
H1{color:Blue !important;}
H2{color:DarkOrange !important;}
p{color:Black !important;}
</style>

# Wikipedia contradict benchmark

<!-- Provide a quick summary of the dataset. -->  


<p align="center">
      <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Example.png?raw=true" width=80%/>
<!--  <img src="./figs/Example.png" width=70%/> -->
</p>



Wikipedia contradict benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created specifically for this task, with an emphasis on high-quality human annotation.

<!--  Note that, in the dataset viewer, there are 130 valid-tag instances, but each instance can contain more than one question and its respective two answers. Hence, the total number of questions and answers is 253. -->

<!-- This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). --> 

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Wikipedia contradict benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts. 

Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. The passage pair is annotated by a human annotator who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.

- **Curated by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. All authors are employed by IBM Research.
<!-- - **Funded by [optional]:** There was no associated grant. -->
- **Shared by:** Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri. 
- **Language(s) (NLP):** English.
- **License:** MIT.

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

<!-- - **Repository:** [More Information Needed] -->
- **Paper:** https://arxiv.org/abs/2406.13805
<!-- - **Demo [optional]:** [More Information Needed] -->

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

The dataset has been used in the paper to assess LLM performance when models are augmented with retrieved passages containing real-world knowledge conflicts (a minimal prompt-construction sketch is given at the end of this subsection).

The following figure illustrates the evaluation process:

<p align="center">
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Evaluation.png?raw=true" width=70%/>
<!--  <img src="./figs/Evaluation.png" width=70%/> -->
</p>

The following table shows the performance of five LLMs (Mistral-7b-inst, Mixtral-8x7b-inst, Llama-2-70b-chat, Llama-3-70b-inst, and GPT-4) on the Wikipedia Contradict Benchmark, based on rigorous human evaluations of a subset of answers for 55 instances, corresponding to 1,375 LLM responses in total.

<p align="center">
  <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/table2.png?raw=true" width=70%/> 
  <!-- <img src="./figs/table2.png" width=70%/> -->
</p>

Notes: “C”, “PC”, and “IC” stand for “Correct”, “Partially correct”, and “Incorrect”, respectively. “all”, “exp”, and “imp” denote instance types: all instances, instances with explicit conflicts, and instances with implicit conflicts. The numbers give the ratio of responses from each LLM that were assessed as “Correct”, “Partially correct”, or “Incorrect” for each instance type under a given prompt template. Bold numbers highlight the best-performing models at correctly answering questions for each instance type and prompt template.
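
To make the evaluation setup concrete, the snippet below sketches how an instance's `merged_context` and `question` fields might be combined into a single RAG-style prompt. This is an illustrative assumption only, not the exact prompt templates used in the paper (those are described in the paper and the accompanying code).

```python
# Minimal sketch of a RAG-style evaluation prompt built from one instance.
# Illustrative only; not the paper's exact prompt template.

def build_prompt(instance: dict) -> str:
    """Combine the merged contradictory passages with the question."""
    return (
        "Answer the question based only on the passage below.\n\n"
        f"Passage: {instance['merged_context']}\n\n"
        f"Question: {instance['question']}\n"
        "Answer:"
    )

# Placeholder instance with the fields described in the Dataset Structure section.
example = {
    "merged_context": "<context1>. <context2>",
    "question": "<question inferred from the contradiction>",
}
print(build_prompt(example))
```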

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

N/A.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Wikipedia contradict benchmark is distributed in CSV format so that researchers can easily use the data. There are 253 instances in total.

The description of each field (when the instance contains two questions) is as follows; a minimal loading sketch is given after the list:


- **question_ID:** ID of question.
- **question:** Question inferred from the contradiction.
- **context1:** First of the two contradictory passages, decontextualized so that it can be read stand-alone.
- **context2:** Second of the two contradictory passages, decontextualized so that it can be read stand-alone.
- **answer1:** Gold answer to question according to context1.
- **answer2:** Gold answer to question according to context2.
- **contradictType:** Focuses on the reasoning aspect. Indicates whether the contradiction is explicit or implicit (Explicit/Implicit); an implicit contradiction requires some reasoning to understand why context1 and context2 contradict each other.
- **samepassage:** Focuses on the source of the contradiction. Indicates whether context1 and context2 come from the same passage or not.
- **merged_context:** context1 and context2 merged into a single paragraph ("context1. context2").
- **ref_answer:** answer1 and answer2 merged into a single string ("answer1|answer2").
- **WikipediaArticleTitle:** Title of article.
- **url:** URL of article.
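
The snippet below is a minimal loading sketch. The repository id passed to `load_dataset` is an assumption based on this card's location on the Hub; substitute the actual path if it differs.

```python
from datasets import load_dataset

# Assumed repository id -- replace with the actual dataset path on the Hub.
ds = load_dataset("ibm/Wikipedia_contradict_benchmark", split="train")

# Inspect a few instances and their conflicting answers.
for row in ds.select(range(3)):
    print(row["question"])
    print("  answer 1:", row["answer1"], "| answer 2:", row["answer2"])
    print("  conflict type:", row["contradictType"], "| same passage:", row["samepassage"])
```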



## Usage of the Dataset

We provide the following starter code. Please refer to the [GitHub repository](https://github.com/) for more information about the functions `load_testingdata` and `generateAnswers_bam_models`.


```python
from genai import Client, Credentials
import datetime
import pytz
import logging
import json
import copy
from dotenv import load_dotenv
from genai.text.generation import CreateExecutionOptions
from genai.schema import (
    DecodingMethod,
    LengthPenalty,
    ModerationParameters,
    ModerationStigma,
    TextGenerationParameters,
    TextGenerationReturnOptions,
)

try:
    from tqdm.auto import tqdm
except ImportError:
    print("Please install tqdm to run this example.")
    raise

load_dotenv()
client = Client(credentials=Credentials.from_env())

logging.getLogger("bampy").setLevel(logging.DEBUG)
fh = logging.FileHandler('bampy.log')
fh.setLevel(logging.DEBUG)
logging.getLogger("bampy").addHandler(fh)

parameters = TextGenerationParameters(
    max_new_tokens=250,
    min_new_tokens=1,
    decoding_method=DecodingMethod.GREEDY,
    return_options=TextGenerationReturnOptions(
        # if ordered is False, you can use return_options to retrieve the corresponding prompt
        input_text=True,
    ),
)


# Load the benchmark instances (helper defined in the GitHub repository).
testingUnits = load_testingdata()
# Query the LLMs on each instance (helper defined in the GitHub repository).
generateAnswers_bam_models(testingUnits)
```



## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this regard, the motivation of Wikipedia Contradict Benchmark is to comprehensively evaluate LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The data consists mostly of raw text. It was retrieved from Wikipedia articles containing inconsistent, self-contradictory, and contradict-other tags. The first two tags denote contradictory statements within the same article, whereas the third tag highlights instances where the content of one article contradicts that of another article. In total, we collected around 1,200 articles that contain these tags through the Wikipedia maintenance category “Wikipedia articles with content issues”. Given a content inconsistency tag provided by Wikipedia editors, the annotators verified whether the tag was valid by checking the relevant article content, the editor’s comment, and, if necessary, the information in the edit history and the article’s talk page.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Wikipedia contributors.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

The annotation interface was developed using [Label Studio](https://labelstud.io/).

The annotators were required to slightly modify the original passages to make them stand-alone (decontextualization). Normally, this requires resolving the coreference anaphors or the bridging anaphors in the first sentence (see annotation guidelines). In Wikipedia, oftentimes the antecedents for these anaphors are the article titles themselves.

For further information, see the annotation guidelines of the paper.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

N/A.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Each annotation instance contains at least one question and two possible answers, but some instances may contain more than one question (and the corresponding two possible answers for each question). Some instances may not contain a value for **paragraphA_clean**, **tagDate**, or **tagReason**.
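
As a minimal sketch, assuming the benchmark CSV has been downloaded locally (the filename below is a placeholder), missing values in optional fields can be inspected with pandas before use:

```python
import pandas as pd

# Placeholder filename -- point this at the downloaded benchmark CSV.
df = pd.read_csv("wikipedia_contradict_benchmark.csv")

# Count missing values per column, then keep only rows with complete QA fields.
print(df.isna().sum())
complete = df.dropna(subset=["question", "answer1", "answer2"])
print(f"{len(complete)} of {len(df)} rows have complete question/answer fields")
```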

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Our data is downloaded from Wikipedia; as such, it is biased towards the original content and sources. Because human data annotation involves some degree of subjectivity, we created a comprehensive 17-page annotation guidelines document to clarify important cases during the annotation process. The annotators were explicitly instructed not to let their personal feelings about a particular topic influence their annotations. Nevertheless, some degree of intrinsic subjectivity might have affected the choices the annotators made during annotation.

Since our dataset requires manual annotation, annotation noise is inevitably introduced. 


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

If this dataset is utilized in your research, kindly cite the following paper:

**BibTeX:**

```
@article{hou2024wikicontradict,
  title={{WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia}},
  author={Hou, Yufang and Pascale, Alessandra and Carnerero-Cano, Javier and Tchrakian, Tigran and Marinescu, Radu and Daly, Elizabeth and Padhi, Inkit and Sattigeri, Prasanna},
  journal={arXiv preprint arXiv:2406.13805},
  year={2024}
}
```

**APA:**

Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E., Padhi, I., & Sattigeri, P. (2024). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. *arXiv preprint arXiv:2406.13805*.

<!-- ## Glossary [optional] -->

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

<!-- [More Information Needed] -->

<!-- ## More Information [optional] -->

<!-- [More Information Needed] -->

## Dataset Card Authors

Yufang Hou, Alessandra Pascale, Javier Carnerero-Cano, Tigran Tchrakian, Radu Marinescu, Elizabeth Daly, Inkit Padhi, and Prasanna Sattigeri.

## Dataset Card Contact

Yufang Hou (yhou@ie.ibm.com), Alessandra Pascale (apascale@ie.ibm.com), Javier Carnerero-Cano (javier.cano@ibm.com), Tigran Tchrakian (tigran@ie.ibm.com), Radu Marinescu (radu.marinescu@ie.ibm.com), Elizabeth Daly (elizabeth.daly@ie.ibm.com), Inkit Padhi (inkpad@ibm.com), and Prasanna Sattigeri (psattig@us.ibm.com).