Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -14,7 +14,7 @@ H2{color:DarkOrange !important;}
 p{color:Black !important;}
 </style>
 
-# Wikipedia Contradict Benchmark
+# Wikipedia contradict benchmark
 
 <!-- Provide a quick summary of the dataset. -->
 
@@ -25,7 +25,7 @@ p{color:Black !important;}
 
 
 
-Wikipedia Contradict Benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created intentionally with that task in mind.
+Wikipedia contradict benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created intentionally with that task in mind.
 
 This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
 
@@ -35,7 +35,7 @@ This dataset card has been generated using [this raw template](https://github.co
 
 <!-- Provide a longer summary of what this dataset is. -->
 
-Wikipedia Contradict Benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts.
+Wikipedia contradict benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts.
 
 Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. The pair is annotated by a human annotator who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.
 
@@ -91,7 +91,7 @@ N/A.
 
 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
-Wikipedia Contradict Benchmark is distributed in JSON format so researchers can easily use the data. There are 253 instances in total.
+Wikipedia contradict benchmark is distributed in JSON format so researchers can easily use the data. There are 253 instances in total.
 The description of each key (when the instance contains two questions) is as follows:
 
 - **title:** Title of article.
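
As a sketch of how an instance in the shape the card describes might look and be consumed in Python — note that every key name below except `title` is a hypothetical placeholder, since only `title` is documented in the excerpt above:

```python
import json

# A minimal instance in the shape described by the card: a question, a pair
# of contradictory Wikipedia passages, and an answer derived from each.
# All key names except "title" are assumptions for illustration only.
instance = {
    "title": "Example article",
    "question": "When was the bridge completed?",
    "passages": ["Passage A says 1901.", "Passage B says 1903."],
    "answers": ["1901", "1903"],
}

# The dataset is distributed as JSON, so instances round-trip cleanly.
serialized = json.dumps(instance)
restored = json.loads(serialized)

print(restored["title"])          # the only key documented in the excerpt
print(len(restored["passages"]))  # a pair of contradictory passages
```

The actual files ship all 253 instances together, so in practice one would `json.load` the downloaded file and iterate over the resulting list rather than build an instance by hand.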