adithya7 committed on
Commit de6d8b6
Parent: db6b7a7

add dataset files, loading script

Files changed (6)
  1. README.md +196 -0
  2. background-summaries.py +123 -0
  3. events.tar.gz +3 -0
  4. splits/dev.txt +3 -0
  5. splits/test.txt +8 -0
  6. splits/train.txt +3 -0
README.md ADDED
@@ -0,0 +1,196 @@
+ ---
+ language:
+ - en
+ license: cc-by-nc-4.0
+ tags:
+ - summarization
+ - event-summarization
+ - background-summarization
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ pretty_name: Background Summarization
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - Timeline17
+ - Crisis
+ - SocialTimeline
+ task_categories:
+ - summarization
+ ---
+
+ # Dataset Card for Background Summarization of Event Timelines
+
+ This dataset provides background text summaries for news event timelines.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ Generating concise summaries of news events is a challenging natural language processing task. While journalists often curate timelines to highlight key sub-events, newcomers to a news event struggle to catch up on its historical context. This dataset addresses that need by introducing the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events. It includes human-annotated backgrounds for 14 major news events from 2005--2014.
+
+ - **Curated by:** Adithya Pratapa, Kevin Small, Markus Dreyer
+ - **Language(s) (NLP):** English
+ - **License:** CC-BY-NC-4.0
+
+ ### Dataset Sources
+
+ <!-- Provide the basic links for the dataset. -->
+
+ - **Repository:** https://github.com/amazon-science/background-summaries
+ - **Paper:** https://arxiv.org/abs/2310.16197
+
+ ## Uses
+
+ <!-- Address questions around how the dataset is intended to be used. -->
+
+ ### Direct Use
+
+ <!-- This section describes suitable use cases for the dataset. -->
+
+ This dataset can be used to train text summarization systems. A trained system generates a background (historical context) for a news update, taking the past news updates as input.
+
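+ As a minimal sketch of direct use, the dataset can be loaded with the `datasets` library. The Hub repository id `adithya7/background-summaries` below is an assumption based on this card:
+
+ ```python
+ from datasets import load_dataset
+
+ # repository id is an assumption; recent `datasets` versions may also
+ # require trust_remote_code=True for script-based datasets
+ dataset = load_dataset("adithya7/background-summaries")
+
+ example = dataset["train"][0]
+ print(example["z"])          # guidance: the current update
+ print(example["tgt"])        # target: the background summary
+ print(example["src"][:200])  # source: concatenated past updates
+ ```
+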
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+
+ Systems trained on this dataset might not perform as expected on domains other than newswire. To avoid factual errors, system-generated summaries should be verified by experts before deployment in real-world settings.
+
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ ### Dataset Fields
+
+ | Field | Name | Description |
+ | :--- | :--- | :--- |
+ | src | Source | Concatenated string of all the previous updates. Each update text includes the publication date. |
+ | z | Guidance | Update text for the current timestep. |
+ | tgt | Target | Background text for the current timestep. |
+
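+ To make the field layout concrete, here is a hypothetical example that mirrors the formatting used by the loading script (the dates and update texts are invented for illustration):
+
+ ```python
+ # hypothetical mini-timeline; the "Date: ..., Update: ..." format follows the loading script
+ past_updates = [
+     ("2011-01-25", "Protests erupt across the country."),
+     ("2011-01-28", "Authorities shut down internet access."),
+ ]
+ # src: timestamped concatenation of all previous updates
+ src = " ".join(f"Date: {d}, Update: {u}" for d, u in past_updates)
+ # z: the update at the current timestep
+ z = "Date: 2011-02-01, Update: The president announces a cabinet reshuffle."
+ # tgt is the human-written background summary for this timestep
+ ```
+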
+ ### Data Splits
+
+ An overview of the major events and their splits in this dataset. The last three columns report statistics for the annotations: the number of updates and the lengths of the update and background summaries.
+
+ | Split | Major event | Sources (# timelines) | Time period | # updates | len(updates) | len(background) |
+ | :--- | :--- | ---: | ---: | ---: | ---: | ---: |
+ | Train | Swine flu | T17 (3) | 2009 | 21 | 52 | 45 |
+ | Train | Financial crisis | T17 (1) | 2008 | 65 | 115 | 147 |
+ | Train | Iraq war | T17 (1) | 2005 | 155 | 41 | 162 |
+ | Validation | Haitian earthquake | T17 (1) | 2010 | 11 | 100 | 61 |
+ | Validation | Michael Jackson death | T17 (1) | 2009--2011 | 37 | 36 | 164 |
+ | Validation | BP oil spill | T17 (5) | 2010--2012 | 118 | 56 | 219 |
+ | Test | NSA leak | SocialTimeline (1) | 2014 | 29 | 45 | 50 |
+ | Test | Gaza conflict | SocialTimeline (1) | 2014 | 38 | 183 | 263 |
+ | Test | MH370 flight disappearance | SocialTimeline (1) | 2014 | 39 | 39 | 127 |
+ | Test | Yemen crisis | Crisis (6) | 2011--2012 | 81 | 30 | 125 |
+ | Test | Russia-Ukraine conflict | SocialTimeline (3) | 2014 | 86 | 112 | 236 |
+ | Test | Libyan crisis | T17 (2); Crisis (7) | 2011 | 118 | 38 | 177 |
+ | Test | Egyptian crisis | T17 (1); Crisis (4) | 2011--2013 | 129 | 34 | 187 |
+ | Test | Syrian crisis | T17 (4); Crisis (5) | 2011--2013 | 164 | 30 | 162 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- Motivation for the creation of this dataset. -->
+
+ Readers often find it difficult to keep track of complex news events. A background summary that provides sufficient historical context can improve the reader's understanding of a news update. This dataset provides human-annotated backgrounds for the development and evaluation of background summarization systems.
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ #### Data Collection and Processing
+
+ <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+
+ This dataset is built upon three popular news timeline summarization datasets: Timeline17 ([Binh Tran et al., 2013](https://dl.acm.org/doi/10.1145/2487788.2487829)), Crisis ([Tran et al., 2015](https://link.springer.com/chapter/10.1007/978-3-319-16354-3_26)), and Social Timeline ([Wang et al., 2015](https://aclanthology.org/N15-1112/)).
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ __Timeline17:__ compiled from an ensemble of news websites, this dataset provides 17 timelines spanning 9 major events from 2005--2013.
+
+ __Crisis:__ a follow-up to the Timeline17 dataset, covering 25 timelines spanning 4 major events. While it mostly covers a subset of events from Timeline17, it adds one new event (the Yemen crisis).
+
+ __Social Timeline:__ provides 6 timelines covering 4 major events from 2014, collected from Wikipedia, NYTimes, and BBC.
+
+ ### Annotations
+
+ <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+ #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ Timelines were originally collected from various news websites (CNN, BBC, NYTimes, etc.), and many events have more than one timeline. Since each timeline covers the same underlying event, we merge them by timestamp to create a single timeline per event (a sketch of this merge follows the list below). During this merging, we often end up with more than one update text per timestamp, possibly with duplicate content, so we ask the annotators to first rewrite the input updates to remove any duplicates. Our annotation process for each news event consists of the following three steps:
+
+ 1. Read the input timeline to get a high-level understanding of the event.
+ 2. For each timestep, read the provided 'rough' update summary. Rewrite the update into a short paragraph, removing any duplicate or previously reported subevents.
+ 3. Go through the timeline in a sequential manner and write a background summary for each timestep.
+
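+ The timestamp-based merge can be pictured with a short sketch. Assumptions: each raw timeline is a list of `(date, text)` pairs, and the released files contain the annotators' deduplicated rewrites rather than this raw merge:
+
+ ```python
+ from collections import defaultdict
+
+ def merge_timelines(timelines):
+     """Merge multiple raw timelines of one event into a single timeline."""
+     merged = defaultdict(list)
+     for timeline in timelines:
+         for date, text in timeline:
+             merged[date].append(text)
+     # one (possibly duplicate-laden) update block per timestep;
+     # annotators later rewrite these blocks to remove the duplicates
+     return {date: " ".join(texts) for date, texts in sorted(merged.items())}
+ ```
+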
+ #### Who are the annotators?
+
+ <!-- This section describes the people or systems who created the annotations. -->
+
+ We hired three professional annotators. For each timeline, we collect three independent pairs of (rewritten) updates and (new) backgrounds.
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ To the best of our knowledge, there is no personal or sensitive information in this dataset.
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ ### Limitations
+
+ __Personalized Backgrounds:__ While a background summary can be useful to any news reader, its utility varies with the reader's familiarity with the event. This dataset does not include backgrounds customized to individual readers.
+
+ __Local Events:__ This dataset is limited to globally popular events involving disasters and conflicts. We leave the task of collecting background summaries for local events to future work.
+
+ __Background from News Articles:__ Background summaries can also be generated directly from news articles. In this dataset, we only consider background summaries based on past news updates. We leave the extension to news articles to future work.
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ __BibTeX:__
+
+ ```bibtex
+ @article{pratapa-etal-2023-background,
+     title = {Background Summarization of Event Timelines},
+     author = {Pratapa, Adithya and Small, Kevin and Dreyer, Markus},
+     publisher = {EMNLP},
+     year = {2023},
+     url = {https://arxiv.org/abs/2310.16197},
+ }
+ ```
+
+ ## Glossary
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
+
+ __Major event:__ the key news story for which we construct a timeline. For instance, 'Egyptian Crisis', 'BP oil spill', and 'MH 370 disappearance' are some of the major events in our dataset.
+
+ __Timeline:__ a series of timesteps. Each timestep in a timeline is associated with an update and a background summary.
+
+ __Timestep:__ day of the event (`yyyy-mm-dd`).
+
+ __Update:__ a short text summary of _what's new_ in the news story. This text summarizes the latest events, specifically ones that are important to the overall story.
+
+ __Background:__ a short text summary that provides _sufficient historical context_ for the current update. The background gives the reader a quick history of the news story without requiring them to read all the previous updates. It should cover past events that help in understanding the current events described in the update.
+
+ ## Dataset Card Authors
+
+ Adithya Pratapa, Kevin Small, Markus Dreyer
+
+ ## Dataset Card Contact
+
+ [Adithya Pratapa](https://apratapa.xyz)
background-summaries.py ADDED
@@ -0,0 +1,123 @@
+ """
+ HF dataset loading script
+ """
+
+ import re
+ from pathlib import Path
+
+ import datasets
+ import pandas as pd
+
+ _DESCRIPTION = """Update-background tuples for 14 news event timelines."""
+
+ _URLS = {
+     "events": "events.tar.gz",
+     "train": "splits/train.txt",
+     "dev": "splits/dev.txt",
+     "test": "splits/test.txt",
+ }
+
+ _CITATION = """\
+ @article{pratapa-etal-2023-background,
+     title = {Background Summarization of Event Timelines},
+     author = {Pratapa, Adithya and Small, Kevin and Dreyer, Markus},
+     publisher = {EMNLP},
+     year = {2023}
+ }
+ """
+ _HOMEPAGE = "https://github.com/amazon-science/background-summaries"
+ _LICENSE = "CC-BY-NC-4.0"
+
+
+ class BackgroundSummConfig(datasets.BuilderConfig):
+     def __init__(self, features, **kwargs) -> None:
+         super().__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.features = features
+
+
+ class BackgroundSumm(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         BackgroundSummConfig(
+             name="background-summ",
+             description=_DESCRIPTION,
+             features=["src", "tgt", "z"],
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {field: datasets.Value("string") for field in ["src", "tgt", "z"]}
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         dl_files = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "events_path": Path(dl_files["events"]),
+                     "splits_path": Path(dl_files["train"]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "events_path": Path(dl_files["events"]),
+                     "splits_path": Path(dl_files["dev"]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "events_path": Path(dl_files["events"]),
+                     "splits_path": Path(dl_files["test"]),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, events_path: Path, splits_path: Path):
+         # load the list of events for this split
+         with open(splits_path, "r") as rf:
+             event_names = [line.strip() for line in rf.readlines()]
+
+         data_idx = 0
+         for event in event_names:
+             # separately load update and background summaries for each annotator
+             annotators = ["annotator1", "annotator2", "annotator3"]
+             for ann in annotators:
+                 # load the annotator's TSV for this event
+                 tsv_path = events_path / "events" / event / f"{ann}.tsv"
+                 df = pd.read_csv(tsv_path, sep="\t")
+                 df = df.fillna("")
+                 timestamps, updates, backgrounds = [], [], []
+                 for idx, row in enumerate(df.itertuples()):
+                     ts = row.Date.strip("[]")
+                     update = row.Update.replace("\\n", " ")
+                     update = re.sub(r"[ ]+", r" ", update).strip()
+                     background = row.Background.replace("\\n", " ")
+                     background = re.sub(r"[ ]+", r" ", background).strip()
+
+                     timestamps += [ts]
+                     updates += [update]
+                     backgrounds += [background]
+
+                     # source is a timestamped concatenation of past updates
+                     src = [
+                         f"Date: {_ts}, Update: {_update}"
+                         for _ts, _update in zip(timestamps[:-1], updates[:-1])
+                     ]
+                     src = " ".join(src)
+                     # target is the current background
+                     tgt = backgrounds[-1]
+                     # guidance is the current update
+                     z = f"Date: {ts}, Update: {updates[-1]}"
+
+                     # skip the first timestep, which has no past updates
+                     if idx > 0:
+                         yield data_idx, {"src": src, "tgt": tgt, "z": z}
+                         data_idx += 1
events.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d41f4c26b726dd3331499dbdc0d26ce8d5e76c84942f0a3bfce0db17b684c058
+ size 521320
splits/dev.txt ADDED
@@ -0,0 +1,3 @@
+ bp_oil_spill
+ haitian_earthquake
+ mj_death
splits/test.txt ADDED
@@ -0,0 +1,8 @@
+ egyptian_crisis
+ gaza_conflict
+ libyan_war
+ mh370_disappearance
+ nsa_leak
+ syrian_crisis
+ ukraine_conflict
+ yemen_crisis
splits/train.txt ADDED
@@ -0,0 +1,3 @@
+ financial_crisis
+ iraq_war
+ swine_flu