system HF staff committed on
Commit
21a8af3
1 Parent(s): 7fbf41b

Update files from the datasets library (from 1.2.1)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.1

Files changed (1)
  1. README.md +45 -23
README.md CHANGED
@@ -46,23 +46,22 @@ task_ids:
 
 ## Dataset Description
 
- - **Homepage:** https://research.google/tools/datasets/xsum-hallucination-annotations/
- - **Repository:** https://github.com/google-research-datasets/xsum_hallucination_annotations
- - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.173.pdf
- - **Leaderboard:** NA
 - **Point of Contact:** [xsum-hallucinations-acl20@google.com](mailto:xsum-hallucinations-acl20@google.com)
 
 ### Dataset Summary
 
- Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. The popular metric such as ROUGE fails to show the severity of the problem. The dataset consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset. The dataset has crowdsourced 3 judgements for each of 500 x 5 document-system pairs. This will be a valuable resource to the abstractive summarization community.
 
 ### Supported Tasks and Leaderboards
 
- [More Information Needed]
 
 ### Languages
 
- [More Information Needed]
 
 ## Dataset Structure
 
@@ -70,6 +69,10 @@ Neural abstractive summarization models are highly prone to hallucinate content
 
 ##### Faithfulness annotations dataset
 
 ```
 {
 'bbcid': 34687720,
@@ -84,6 +87,10 @@ Neural abstractive summarization models are highly prone to hallucinate content
 
 ##### Factuality annotations dataset
 
 ```
 {
 'bbcid': 29911712,
@@ -101,14 +108,14 @@ Neural abstractive summarization models are highly prone to hallucinate content
 Raters are shown the news article and the system summary, and are tasked with identifying and annotating the spans that aren't supported by the input article. The file contains the following columns:
 
- - bbcid: Document id in the XSum corpus.
- - system: Name of neural summarizer.
- - summary: Summary generated by ‘system’.
- - hallucination_type: Type of hallucination: intrinsic (0) or extrinsic (1)
- - hallucinated_span: Hallucinated span in the ‘summary’.
- - hallucinated_span_start: Index of the start of the hallucinated span.
- - hallucinated_span_end: Index of the end of the hallucinated span.
- - worker_id: 'wid_0', 'wid_1', 'wid_2'
 
 The `hallucination_type` column has NULL values for some entries; these have been replaced with `-1`.
@@ -118,18 +125,23 @@ The `hallucination_type` column has NULL value for some entries which have been
 Raters are shown the news article and the hallucinated system summary, and are tasked with assessing whether the summary is factual or not. The file contains the following columns:
 
- - bbcid: Document id in the XSum corpus.
- - system: Name of neural summarizer.
- - summary: Summary generated by ‘system’.
- - is_factual: yes (1) or no (0)
- - worker_id: 'wid_0', 'wid_1', 'wid_2'
 
 The `is_factual` column has NULL values for some entries; these have been replaced with `-1`.
 
 ### Data Splits
 
- [More Information Needed]
 
 ## Dataset Creation
 
@@ -183,8 +195,18 @@ The `is_factual` column has NULL value for some entries which have been replaced
 
 ### Licensing Information
 
- [More Information Needed]
 
 ### Citation Information
 
- [More Information Needed]
 
 
 ## Dataset Description
 
+ - **Homepage:** [XSUM Hallucination Annotations Homepage](https://research.google/tools/datasets/xsum-hallucination-annotations/)
+ - **Repository:** [XSUM Hallucination Annotations Repository](https://github.com/google-research-datasets/xsum_hallucination_annotations)
+ - **Paper:** [ACL Web](https://www.aclweb.org/anthology/2020.acl-main.173.pdf)
 - **Point of Contact:** [xsum-hallucinations-acl20@google.com](mailto:xsum-hallucinations-acl20@google.com)
 
 ### Dataset Summary
 
+ Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. Popular metrics such as ROUGE fail to capture the severity of the problem. This dataset contains a large-scale human evaluation of several neural abstractive summarization systems, carried out to better understand the types of hallucinations they produce. It consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset, with three crowdsourced judgements for each of the 500 x 5 document-system pairs. This is a valuable resource for the abstractive summarization community.
 
 ### Supported Tasks and Leaderboards
 
+ * `summarization`: The dataset can be used to train a model for summarization, which consists in summarizing a given document. Success on this task is typically measured by achieving a *high* [ROUGE score](https://huggingface.co/metrics/rouge).
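
To make that metric concrete, here is a minimal, self-contained sketch of scoring one candidate summary against a reference with the `rouge_score` package. The two sentences are invented placeholders rather than items from this dataset, and the package is not part of the dataset card's own tooling.

```python
# Minimal ROUGE sketch using the rouge_score package (pip install rouge-score).
# The reference/prediction strings below are invented placeholders, not dataset content.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="The council approved the new housing development plan.",
    prediction="A new housing development plan was approved by the council.",
)
print(scores["rougeL"].fmeasure)  # higher is better
```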
 
 ### Languages
 
+ The text in the dataset is in English and consists of abstractive summaries for the [XSum dataset](https://www.aclweb.org/anthology/D18-1206.pdf). The associated BCP-47 code is `en`.
 
 ## Dataset Structure
 
 ##### Faithfulness annotations dataset
 
+ A typical data point consists of an ID referring to the news article (complete document), the summary, and the hallucination span information.
+
+ An example from the XSum Faithfulness dataset looks as follows:
+
 ```
 {
 'bbcid': 34687720,
 
 ##### Factuality annotations dataset
 
+ A typical data point consists of an ID referring to the news article (complete document), the summary, and whether the summary is factual or not.
+
+ An example from the XSum Factuality dataset looks as follows:
+
 ```
 {
 'bbcid': 29911712,
 Raters are shown the news article and the system summary, and are tasked with identifying and annotating the spans that aren't supported by the input article. The file contains the following columns:
 
+ - `bbcid`: Document id in the XSum corpus.
+ - `system`: Name of neural summarizer.
+ - `summary`: Summary generated by `system`.
+ - `hallucination_type`: Type of hallucination: intrinsic (0) or extrinsic (1).
+ - `hallucinated_span`: Hallucinated span in the `summary`.
+ - `hallucinated_span_start`: Index of the start of the hallucinated span.
+ - `hallucinated_span_end`: Index of the end of the hallucinated span.
+ - `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2').
 
 The `hallucination_type` column has NULL values for some entries; these have been replaced with `-1`.
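
A minimal sketch of how these fields might be consumed with the `datasets` library, assuming the data is published under the Hub identifier `xsum_factuality` with an `xsum_faithfulness` configuration (adjust the names to wherever the data actually lives):

```python
from datasets import load_dataset

# Assumed Hub identifier/config name; replace with the actual location of the data.
ds = load_dataset("xsum_factuality", "xsum_faithfulness", split="train")

# 0 = intrinsic, 1 = extrinsic, -1 = originally NULL (see the field description above).
TYPE_LABELS = {0: "intrinsic", 1: "extrinsic", -1: "unknown"}

# Add a readable label column.
ds = ds.map(lambda ex: {"hallucination_type_label": TYPE_LABELS[ex["hallucination_type"]]})

# Keep only spans whose hallucination type was actually annotated.
annotated = ds.filter(lambda ex: ex["hallucination_type"] != -1)
print(annotated.num_rows, "of", ds.num_rows, "rows carry an intrinsic/extrinsic label")
```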
 
 Raters are shown the news article and the hallucinated system summary, and are tasked with assessing whether the summary is factual or not. The file contains the following columns:
 
+ - `bbcid`: Document id in the XSum corpus.
+ - `system`: Name of neural summarizer.
+ - `summary`: Summary generated by `system`.
+ - `is_factual`: Yes (1) or No (0).
+ - `worker_id`: Worker ID (one of 'wid_0', 'wid_1', 'wid_2').
 
 The `is_factual` column has NULL values for some entries; these have been replaced with `-1`.
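
Because each summary carries three judgements, a common downstream step is a per-summary majority vote. The sketch below uses the same assumed `xsum_factuality` Hub identifier as above and skips the `-1` (originally NULL) entries:

```python
from collections import Counter, defaultdict
from datasets import load_dataset

# Assumed Hub identifier/config name; replace with the actual location of the data.
ds = load_dataset("xsum_factuality", "xsum_factuality", split="train")

# Group the worker judgements per (bbcid, system) summary, skipping -1 (originally NULL).
votes = defaultdict(list)
for row in ds:
    if row["is_factual"] in (0, 1):
        votes[(row["bbcid"], row["system"])].append(row["is_factual"])

# Majority vote per summary (ties are resolved arbitrarily by Counter ordering).
majority = {key: Counter(v).most_common(1)[0][0] for key, v in votes.items()}
factual_share = sum(majority.values()) / len(majority)
print(f"{len(majority)} summaries, {factual_share:.1%} judged factual by majority vote")
```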
 
 ### Data Splits
 
+ There is only a single split for both the Faithfulness annotations dataset and the Factuality annotations dataset.
+
+ |                          | Train |
+ | ------------------------ | ----- |
+ | Faithfulness annotations | 11185 |
+ | Factuality annotations   |  5597 |
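
As a quick sanity check of these counts, one could load both configurations and print the split sizes, again under the assumption that the data is hosted as `xsum_factuality` with the two configurations named in the earlier sketches:

```python
from datasets import load_dataset

# Assumed Hub identifier/config names; replace with the actual location of the data.
for config in ("xsum_faithfulness", "xsum_factuality"):
    splits = load_dataset("xsum_factuality", config)
    print(config, {name: ds.num_rows for name, ds in splits.items()})
# Expected, per the table above: a single 'train' split with 11185 / 5597 rows.
```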
 
 ## Dataset Creation
 
 ### Licensing Information
 
+ [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
 
 ### Citation Information
 
+ ```
+ @InProceedings{maynez_acl20,
+ author = "Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan Thomas McDonald",
+ title = "On Faithfulness and Factuality in Abstractive Summarization",
+ booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+ year = "2020",
+ pages = "1906--1919",
+ address = "Online",
+ }
+ ```