michal-stefanik committed
Commit 041a62d
1 Parent(s): eb70321

Update README.md

Files changed (1): README.md +83 -2
README.md CHANGED
@@ -32,7 +32,88 @@ dataset_info:
  num_examples: 5571
  download_size: 231554886
  dataset_size: 1605837137
+ license: cc-by-sa-4.0
+ task_categories:
+ - question-answering
+ - conversational
+ - text2text-generation
+ language:
+ - en
+ pretty_name: Canard Wikipedia-augmented
+ size_categories:
+ - 10K<n<100K
  ---
- # Dataset Card for "Canard_Wiki-augmented"
+ # Dataset Card for Canard_Wiki-augmented
 
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Summary
+
+ This is a dataset of fact-retrieving conversations about Wikipedia articles, with all responses grounded in a specific segment of text in the referenced Wikipedia article.
+ It is an extended version of the [Canard](https://sites.google.com/view/qanta/projects/canard)
+ and [QuAC](https://huggingface.co/datasets/quac) datasets,
+ augmented with contexts from [English Wikipedia](https://huggingface.co/datasets/wikipedia).
+
+ ### Supported Tasks
+
+ The dataset is intended for training a factually consistent conversational model that grounds all of its responses in the corresponding source(s).
+ The data can also be used to evaluate information retrieval (IR) systems on the given queries, to contextually disambiguate questions within a conversation, and more.
+
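+ As an illustration, here is a minimal sketch of turning one sample into an input/target pair for a grounded seq2seq model; the prompt format below is an assumption for demonstration, not the authors' prescribed recipe:
+
+ ```python
+ def to_seq2seq_pair(sample: dict) -> tuple[str, str]:
+     """Build a (source, target) pair; the prompt layout is hypothetical."""
+     history = " ".join(sample["History"])  # conversation so far
+     source = (f"Question: {sample['Question']} "
+               f"History: {history} "
+               # ground the response in the relevant Wikipedia passage
+               f"Source: {sample['true_contexts']}")
+     target = sample["answer"]  # the grounded reference answer
+     return source, target
+ ```
+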
+ ## Dataset Structure
+
+ The dataset can be loaded by simply choosing a split (`train` or `test`) and calling:
+
+ ```python
+ import datasets
+
+ canard_augm_test = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
+
+ print(canard_augm_test[0])  # print the first sample
+ ```
+
+ ### Data Instances
+
+ Samples of Canard_Wiki-augmented have the following format:
+
+ ```python
+ {'History': ['Anna Politkovskaya', 'The murder remains unsolved, 2016'],
+  'QuAC_dialog_id': 'C_0aaa843df0bd467b96e5a496fc0b033d_1',
+  'Question': 'Did they have any clues?',
+  'Question_no': 1,
+  'answer': 'Her colleagues at Novaya gazeta protested that until the instigator or sponsor of the crime was identified, arrested and prosecuted the case was not closed.',
+  'Rewrite': 'Did investigators have any clues in the unresolved murder of Anna Politkovskaya?',
+  'true_page_title': 'Anna Politkovskaya',
+  'true_contexts': 'In September 2016 Vladimir Markin, official spokesman for (...)',
+  'true_contexts_wiki': 'Anna Stepanovna Politkovskaya was a US-born Russian journalist (...)',
+  'extractive': True,
+  'retrieved_contexts': ['Clues was an indie rock band from Montreal, Canada formed by Alden Penner (...)',
+   'High Stakes is a British game show series hosted by Jeremy Kyle, in which (...)']}
+ ```
+
+ ### Data Fields
+
+ * **History**: History of the conversation, from Canard. The first two entries of the conversation are always synthetic.
+ * **QuAC_dialog_id**: Dialogue ID mapping the conversation to the original QuAC dataset (*dialogue_id* in QuAC).
+ * **Question**: The user's current question, from Canard.
+ * **Question_no**: Order of the user's question within the conversation, originally from Canard.
+ * **answer**: The answer to the given question, extracted from the relevant Wikipedia article (*true_contexts*). Note that some of the questions are open-ended, so the listed answer is not the only correct possibility.
+ * **Rewrite**: A rephrased version of *Question*, manually disambiguated using the context of *History* by the annotators of Canard.
+ * **true_page_title**: Title of the Wikipedia article containing *answer* (*wikipedia_page_title* in QuAC).
+ * **true_contexts**: An excerpt of the paragraph containing the answer, from the Wikipedia article titled *true_page_title*.
+ * **true_contexts_wiki**: The full contents of the Wikipedia article (*text* in the Wikipedia dataset) whose *title* matches *true_page_title*. Note that the Wikipedia dump was retrieved on April 2, 2023.
+ * **extractive**: A flag indicating whether the *answer* in this sample occurs as an exact match in *true_contexts_wiki*; see the sketch after this list.
+ * **retrieved_contexts**: "Distractor" contexts retrieved from the full Wikipedia dataset by an Okapi BM25 IR system, using the *Rewrite* question as the query.
+
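+ The following sketch checks the *extractive* flag and mimics the distractor retrieval on a toy corpus; the `rank_bm25` library and the whitespace tokenization are assumptions, not necessarily the authors' exact IR setup:
+
+ ```python
+ import datasets
+ from rank_bm25 import BM25Okapi  # assumption: any Okapi BM25 implementation would do
+
+ canard_augm_test = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
+ sample = canard_augm_test[0]
+
+ # `extractive` marks whether the answer occurs verbatim in the full article text
+ # (the check may be sensitive to whitespace normalization)
+ print(sample["extractive"], sample["answer"] in sample["true_contexts_wiki"])
+
+ # toy corpus standing in for the full Wikipedia dump the distractors came from
+ corpus = [sample["true_contexts"]] + sample["retrieved_contexts"]
+ bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
+ query = sample["Rewrite"].lower().split()
+ print(bm25.get_top_n(query, corpus, n=2))  # the two highest-scoring contexts
+ ```
+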
+ ### Data Splits
+
+ * The **train** split is aligned with the training splits of Canard and QuAC.
+ * The **test** split matches the validation split of QuAC and the test split of Canard (where the conversation IDs match).
+
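+ Because the conversation IDs are shared, the test split can be joined back to QuAC, for example as follows (a sketch assuming the Hugging Face `quac` loader and its `dialogue_id` field):
+
+ ```python
+ import datasets
+
+ canard_augm_test = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
+ quac_valid = datasets.load_dataset("quac", split="validation")
+
+ # index QuAC validation dialogues by ID, then look up each test sample
+ quac_by_id = {dialog["dialogue_id"]: dialog for dialog in quac_valid}
+ matched = [s for s in canard_augm_test if s["QuAC_dialog_id"] in quac_by_id]
+ print(f"{len(matched)} / {len(canard_augm_test)} test samples map to a QuAC validation dialogue")
+ ```
+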
+ ## Licensing
+
+ This dataset is composed of [QuAC](https://huggingface.co/datasets/quac) (MIT),
+ [Canard](https://sites.google.com/view/qanta/projects/canard) (CC BY-SA 4.0)
+ and [Wikipedia](https://huggingface.co/datasets/wikipedia) (CC BY-SA 3.0).
+ Canard_Wiki-augmented is therefore licensed under CC BY-SA 4.0 as well, which also allows commercial use.
+
+ ## Cite
+
+ If you use this dataset in your research, do not forget to cite the authors of the original datasets that Canard_Wiki-augmented is derived from:
+ [QuAC](https://huggingface.co/datasets/quac), [Canard](https://sites.google.com/view/qanta/projects/canard).