Dataset: tweet_qa
Modalities: Text
Formats: parquet
Languages: English
ArXiv: 1907.06292
Libraries: Datasets, pandas
License: cc-by-sa-4.0
system (HF staff) committed
Commit: 618516f
1 Parent(s): b5d7b18

Update files from the datasets library (from 1.18.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (3)
  1. README.md +59 -36
  2. dataset_infos.json +1 -1
  3. tweet_qa.py +9 -11
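
For orientation before the diffs below: the dataset described by this card is consumed through the `datasets` library named in the page header. A minimal loading sketch (not part of the commit; it only assumes a `datasets` version with this loader available, 1.18.0 or later):

```python
from datasets import load_dataset

# Loads the three splits declared by tweet_qa.py: train, validation, test.
tweet_qa = load_dataset("tweet_qa")

# Every example carries the four fields documented in the card below.
example = tweet_qa["train"][0]
print(example["qid"])
print(example["Question"])
print(example["Answer"])   # list of reference answers
print(example["Tweet"])

# The test split ships without labels, so its Answer field is an empty list.
print(tweet_qa["test"][0]["Answer"])  # -> []
```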
README.md CHANGED
@@ -6,7 +6,7 @@ language_creators:
 languages:
 - en
 licenses:
- - unknown
+ - cc-by-sa-4-0
 multilinguality:
 - monolingual
 size_categories:
@@ -18,6 +18,7 @@ task_categories:
 task_ids:
 - open-domain-qa
 paperswithcode_id: tweetqa
+ pretty_name: TweetQA
 ---

 # Dataset Card for TweetQA
@@ -48,74 +49,87 @@ paperswithcode_id: tweetqa

 ## Dataset Description

- - **Homepage: [TweetQA homepage](https://tweetqa.github.io/)**
- - **Repository: **
- - **Paper: [TweetQA paper]([TweetQA repository](https://tweetqa.github.io/)**
- - **Leaderboard:**
- - **Point of Contact: [Wenhan Xiong](xwhan@cs.ucsb.edu)**
+ - **Homepage:** [TweetQA homepage](https://tweetqa.github.io/)
+ - **Repository:**
+ - **Paper:** [TWEETQA: A Social Media Focused Question Answering Dataset](https://arxiv.org/abs/1907.06292)
+ - **Leaderboard:** [TweetQA Leaderboard](https://tweetqa.github.io/)
+ - **Point of Contact:** [Wenhan Xiong](xwhan@cs.ucsb.edu)

 ### Dataset Summary

+ With social media becoming an increasingly popular place where news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, TweetQA is the first large-scale dataset for QA over social media data. To make sure the tweets are meaningful and contain interesting information, only tweets used by journalists to write news articles are gathered. Human annotators are then asked to write questions and answers about these tweets. Unlike other QA datasets such as SQuAD, in which the answers are extractive, the answers here are allowed to be abstractive. The task requires a model to read a short tweet and a question and to output a text phrase (which does not need to be in the tweet) as the answer.

 ### Supported Tasks and Leaderboards

- [More Information Needed]
+ - `question-answering`: The dataset can be used to train a model for open-domain question answering, where the task is to answer a given question about a tweet. Performance is measured by comparing the model answers to the annotated ground truth and calculating BLEU-1/Meteor/ROUGE-L scores. The task has an active leaderboard, which can be found [here](https://tweetqa.github.io/) and ranks models based on [BLEU-1](https://huggingface.co/metrics/bleu), [Meteor](https://huggingface.co/metrics/meteor) and [ROUGE-L](https://huggingface.co/metrics/rouge).

 ### Languages

- [More Information Needed]
+ English.

 ## Dataset Structure

 ### Data Instances
+
 Sample data:
+
 ```
 {
- "Question": "who is the tallest host?",
- "Answer": ["sam bee","sam bee"],
- "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. #alternativefacts\u2014 Full Frontal (@FullFrontalSamB) January 22, 2017",
- "qid": "3554ee17d86b678be34c4dc2c04e334f"
+ "Question": "who is the tallest host?",
+ "Answer": ["sam bee","sam bee"],
+ "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. #alternativefacts\u2014 Full Frontal (@FullFrontalSamB) January 22, 2017",
+ "qid": "3554ee17d86b678be34c4dc2c04e334f"
 }
 ```
+
+ The test split doesn't include answers, so the Answer field is an empty list.
+
 ### Data Fields

- Question: a question based on information from a tweet
- Answer: list of possible answers from the tweet
- Tweet: source tweet
- qid: question id
+ - `Question`: a question based on information from a tweet
+ - `Answer`: list of possible answers from the tweet
+ - `Tweet`: source tweet
+ - `qid`: question id

 ### Data Splits

- The dataset is split in train, validation and test.
- The test split doesn't include answers so the Answer field is an empty list.
-
- [More Information Needed]
+ The dataset is split into train, validation and test sets. The train set contains 10692 examples, the validation set 1086 and the test set 1979 examples.

 ## Dataset Creation

 ### Curation Rationale

- With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on realtime knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive
+ With social media becoming an increasingly popular place where news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, TweetQA is the first large-scale dataset for QA over social media data. To make sure the tweets are meaningful and contain interesting information, only tweets used by journalists to write news articles are gathered. Human annotators are then asked to write questions and answers about these tweets. Unlike other QA datasets such as SQuAD, in which the answers are extractive, the answers here are allowed to be abstractive. The task requires a model to read a short tweet and a question and to output a text phrase (which does not need to be in the tweet) as the answer.

 ### Source Data

 #### Initial Data Collection and Normalization

- We first describe the three-step data collection process of TWEETQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TWEETQA and discuss several evaluation metrics. To better understand the characteristics of the TWEETQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set.
+ The authors look into the archived snapshots of two major news websites (CNN, NBC) and extract the tweet blocks that are embedded in the news articles. In order to get enough data, they first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. They then filter out the tweets that rely heavily on attached media to convey information: a state-of-the-art semantic role labeling model trained on CoNLL-2005 (He et al., 2017) is used to analyze the predicate-argument structure of the tweets collected from news articles, and only the tweets with more than two labeled arguments are kept. This filtering process also automatically removes most of the short tweets. For the tweets collected from CNN, 22.8% were filtered via semantic role labeling; for tweets from NBC, 24.1% were filtered.

 #### Who are the source language producers?

- [More Information Needed]
+ Twitter users.

 ### Annotations

 #### Annotation process

- [More Information Needed]
+ Amazon Mechanical Turk workers were used to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), the authors ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure quality, they require the workers to be located in major English-speaking countries (i.e. Canada, US, and UK) and to have an acceptance rate larger than 95%. Since the authors use tweets as context, a lot of important information is contained in hashtags or even emojis. Instead of only showing the text to the workers, they use JavaScript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and helps them to better compose questions. To avoid trivial questions that can be answered by superficial text matching, or overly challenging questions that require background knowledge, the authors explicitly state the following items in the HIT instructions for question writing:
+ - No yes-no questions should be asked.
+ - The question should have at least five words.
+ - Videos, images or inserted links should not be considered.
+ - No background knowledge should be required to answer the question.
+
+ To help the workers better follow the instructions, they also include a representative example showing both good and bad questions or answers in the instructions. As for the answers, since the context considered here is relatively shorter than in previous datasets, the answers are not restricted to spans of the tweet; otherwise, the task could potentially be simplified into a classification problem. The workers are allowed to write the answers in their own words, but the answers must be brief and directly inferable from the tweets. After retrieving the QA pairs from all HITs, the authors conduct further post-filtering to remove pairs from workers who obviously did not follow the instructions: QA pairs with yes/no answers are removed, and questions with fewer than five words are also filtered out. This process filtered 13% of the QA pairs. The dataset includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. All QA pairs were written by 492 individual workers.

 #### Who are the annotators?

- [More Information Needed]
+ Amazon Mechanical Turk workers.

 ### Personal and Sensitive Information

@@ -129,7 +143,13 @@ We first describe the three-step data collection process of TWEETQA: tweet crawl

 ### Discussion of Biases

- [More Information Needed]
+ From the paper:
+ > It is also worth noting that the data collected from social media can not only capture events and developments in real-time but also capture individual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Table 1.
+
+ > Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data:
+ - Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets related to the authors.
+ - Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. Our TWEETQA also requires understanding some tweet-specific English, like conversation-style English.
+ - Understanding of user IDs & hashtags: Tweets often contains user IDs and hashtags, which are single special tokens. Understanding these special tokens is important to answer person- or event-related questions.

 ### Other Known Limitations

@@ -137,25 +157,28 @@ We first describe the three-step data collection process of TWEETQA: tweet crawl

 ## Additional Information

+ The annotated answers are validated by the authors as follows:
+ For the purposes of human performance evaluation and inter-annotator agreement checking, the authors launch a different set of HITs asking workers to answer questions in the test and development sets. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label a question as “NA” if they think it is not answerable. They find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%). Since the answers collected at this step and at the previous step are written by different workers, the answers can be written in different text forms even when they are semantically equal to each other; for example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, they manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answer pairs are semantically equivalent, 2% are partially equivalent (one of them is incomplete) and 8% are totally inconsistent. The answers collected at this step are also used to measure human performance. 59 individual workers participated in this process.
+
 ### Dataset Curators

- [Wenhan Xiong](xwhan@cs.ucsb.edu) of UCSB
+ Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang.

 ### Licensing Information

- [More Information Needed]
+ CC BY-SA 4.0.

 ### Citation Information

- @misc{xiong2019tweetqa,
- title={TWEETQA: A Social Media Focused Question Answering Dataset},
- author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},
- year={2019},
- eprint={1907.06292},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
+ ```
+ @inproceedings{xiong2019tweetqa,
+ title={TweetQA: A Social Media Focused Question Answering Dataset},
+ author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},
+ booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
+ year={2019}
 }
+ ```

 ### Contributions

- Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
+ Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
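
The Supported Tasks section above says submissions are ranked by BLEU-1, Meteor and ROUGE-L. A rough scoring sketch using the metric scripts that shipped with this era of `datasets` (`load_metric` and the return keys shown are assumptions to verify against the installed version; BLEU-1 is omitted because it additionally expects tokenized inputs):

```python
from datasets import load_dataset, load_metric

validation = load_dataset("tweet_qa", split="validation")

# Hypothetical "model": echo the first reference answer, just to exercise the metrics.
predictions = [example["Answer"][0] for example in validation]
references = [example["Answer"][0] for example in validation]

meteor = load_metric("meteor").compute(predictions=predictions, references=references)
rouge = load_metric("rouge").compute(predictions=predictions, references=references)

print("Meteor:", meteor["meteor"])
print("ROUGE-L:", rouge["rougeL"].mid.fmeasure)
```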
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"default": {"description": " TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.\n", "citation": "@misc{xiong2019tweetqa,\n title={TWEETQA: A Social Media Focused Question Answering Dataset},\n author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},\n year={2019},\n eprint={1907.06292},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://tweetqa.github.io/", "license": "CC BY-SA 4.0", "features": {"Question": {"dtype": "string", "id": null, "_type": "Value"}, "Answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "Tweet": {"dtype": "string", "id": null, "_type": "Value"}, "qid": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tweet_qa", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3386268, "num_examples": 10692, "dataset_name": "tweet_qa"}, "test": {"name": "test", "num_bytes": 473734, "num_examples": 1979, "dataset_name": "tweet_qa"}, "validation": {"name": "validation", "num_bytes": 408535, "num_examples": 1086, "dataset_name": "tweet_qa"}}, "download_checksums": {"https://sites.cs.ucsb.edu/~xwhan/datasets/tweetqa.zip": {"num_bytes": 1573980, "checksum": "e0db1b71836598aaea8785f1911369b5bca0d839504b97836eb5cb7427c7e4d9"}}, "download_size": 1573980, "post_processing_size": null, "dataset_size": 4268537, "size_in_bytes": 5842517}}
 
+ {"default": {"description": "TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.\n", "citation": "@inproceedings{xiong2019tweetqa,\n title={TweetQA: A Social Media Focused Question Answering Dataset},\n author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},\n booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},\n year={2019}\n}\n", "homepage": "https://tweetqa.github.io/", "license": "CC BY-SA 4.0", "features": {"Question": {"dtype": "string", "id": null, "_type": "Value"}, "Answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "Tweet": {"dtype": "string", "id": null, "_type": "Value"}, "qid": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "tweet_qa", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2770036, "num_examples": 10692, "dataset_name": "tweet_qa"}, "test": {"name": "test", "num_bytes": 473730, "num_examples": 1979, "dataset_name": "tweet_qa"}, "validation": {"name": "validation", "num_bytes": 295435, "num_examples": 1086, "dataset_name": "tweet_qa"}}, "download_checksums": {"https://sites.cs.ucsb.edu/~xwhan/datasets/tweetqa.zip": {"num_bytes": 1573980, "checksum": "e0db1b71836598aaea8785f1911369b5bca0d839504b97836eb5cb7427c7e4d9"}}, "download_size": 1573980, "post_processing_size": null, "dataset_size": 3539201, "size_in_bytes": 5113181}}
tweet_qa.py CHANGED
@@ -22,18 +22,16 @@ import datasets


 _CITATION = """\
- @misc{xiong2019tweetqa,
- title={TWEETQA: A Social Media Focused Question Answering Dataset},
- author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},
- year={2019},
- eprint={1907.06292},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
+ @inproceedings{xiong2019tweetqa,
+ title={TweetQA: A Social Media Focused Question Answering Dataset},
+ author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},
+ booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
+ year={2019}
 }
 """

 _DESCRIPTION = """\
- TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.
+ TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.
 """

 _HOMEPAGE = "https://tweetqa.github.io/"
@@ -101,12 +99,12 @@ class TweetQA(datasets.GeneratorBasedBuilder):

         with open(filepath, encoding="utf-8") as f:
             tweet_qa = json.load(f)
+             idx = 0
             for data in tweet_qa:
-                 id_ = data["qid"]
-
-                 yield id_, {
+                 yield idx, {
                     "Question": data["Question"],
                     "Answer": [] if split == "test" else data["Answer"],
                     "Tweet": data["Tweet"],
                     "qid": data["qid"],
                 }
+                 idx += 1
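
The loader change above swaps the `qid`-based example key for a running integer, presumably to guarantee unique keys for the duplicate-key check performed by the `datasets` generator pipeline (an assumption; the commit message only mentions syncing with 1.18.0). A minimal standalone sketch of the same pattern written with `enumerate` rather than a manual counter (the function name is illustrative, not the builder's actual method):

```python
import json

def generate_examples(filepath, split):
    """Yield (key, example) pairs with a unique integer key per example."""
    with open(filepath, encoding="utf-8") as f:
        tweet_qa = json.load(f)
    for idx, data in enumerate(tweet_qa):
        yield idx, {
            "Question": data["Question"],
            "Answer": [] if split == "test" else data["Answer"],
            "Tweet": data["Tweet"],
            "qid": data["qid"],
        }
```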