Commit a900ef4 (0 parents)
system (HF staff) committed: Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +155 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. tweet_qa.py +112 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,155 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - open-domain-qa
+ ---
+
+ # Dataset Card for TweetQA
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [TweetQA homepage](https://tweetqa.github.io/)
+ - **Repository:**
+ - **Paper:** [TWEETQA: A Social Media Focused Question Answering Dataset](https://arxiv.org/abs/1907.06292)
+ - **Leaderboard:**
+ - **Point of Contact:** [Wenhan Xiong](mailto:xwhan@cs.ucsb.edu)
+
+ ### Dataset Summary
+
+ TweetQA is the first large-scale dataset for question answering (QA) over social media data, built by leveraging news media and crowdsourcing.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports open-domain question answering over social media text (see the `task_categories` and `task_ids` metadata above).
+
+ ### Languages
+
+ The dataset is in English (`en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Sample data:
+ ```
+ {
+     "Question": "who is the tallest host?",
+     "Answer": ["sam bee", "sam bee"],
+     "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. #alternativefacts\u2014 Full Frontal (@FullFrontalSamB) January 22, 2017",
+     "qid": "3554ee17d86b678be34c4dc2c04e334f"
+ }
+ ```
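+
+ A minimal loading sketch (an illustration, assuming the `datasets` library is installed and the dataset is available under the name `tweet_qa`):
+
+ ```
+ from datasets import load_dataset
+
+ # Download and load all three splits of TweetQA.
+ dataset = load_dataset("tweet_qa")
+
+ # Inspect one training example; fields are Question, Answer, Tweet, qid.
+ example = dataset["train"][0]
+ print(example["Question"])
+ print(example["Answer"])  # a list of acceptable answer strings
+ ```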
+ ### Data Fields
+
+ - `Question`: a question based on information from a tweet
+ - `Answer`: list of possible answers from the tweet
+ - `Tweet`: source tweet
+ - `qid`: question id
+
+ ### Data Splits
+
+ The dataset is split into train, validation, and test sets. The test split does not include answers, so its `Answer` field is an empty list.
+
+ | Split      | Examples |
+ | ---------- | -------- |
+ | train      | 10692    |
+ | validation | 1086     |
+ | test       | 1979     |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ With social media becoming increasingly popular as a medium on which news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) over formal text such as news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers about these tweets. Unlike other QA datasets such as SQuAD, in which the answers are extractive, we allow the answers to be abstractive.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ TWEETQA was collected in three steps: tweet crawling, question-answer writing, and answer validation. The accompanying paper also defines the specific TWEETQA task, discusses several evaluation metrics, and, to better understand the characteristics of the task, analyzes answer and question characteristics using a subset of QA pairs from the development set.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Wenhan Xiong](mailto:xwhan@cs.ucsb.edu) of UCSB
+
+ ### Licensing Information
+
+ CC BY-SA 4.0 (per the dataset script and `dataset_infos.json`; note that the card metadata above lists the license as unknown).
+
+ ### Citation Information
+
+ @misc{xiong2019tweetqa,
+       title={TWEETQA: A Social Media Focused Question Answering Dataset},
+       author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},
+       year={2019},
+       eprint={1907.06292},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": " TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.\n", "citation": "@misc{xiong2019tweetqa,\n title={TWEETQA: A Social Media Focused Question Answering Dataset},\n author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},\n year={2019},\n eprint={1907.06292},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://tweetqa.github.io/", "license": "CC BY-SA 4.0", "features": {"Question": {"dtype": "string", "id": null, "_type": "Value"}, "Answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "Tweet": {"dtype": "string", "id": null, "_type": "Value"}, "qid": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tweet_qa", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3386268, "num_examples": 10692, "dataset_name": "tweet_qa"}, "test": {"name": "test", "num_bytes": 473734, "num_examples": 1979, "dataset_name": "tweet_qa"}, "validation": {"name": "validation", "num_bytes": 408535, "num_examples": 1086, "dataset_name": "tweet_qa"}}, "download_checksums": {"https://sites.cs.ucsb.edu/~xwhan/datasets/tweetqa.zip": {"num_bytes": 1573980, "checksum": "e0db1b71836598aaea8785f1911369b5bca0d839504b97836eb5cb7427c7e4d9"}}, "download_size": 1573980, "post_processing_size": null, "dataset_size": 4268537, "size_in_bytes": 5842517}}
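A quick sketch of how one might inspect this metadata file from a local clone (illustrative only; assumes the file has been checked out as `dataset_infos.json`):

```
import json

# Read the metadata that ships with the dataset script.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# Print the number of examples per split for the default config.
for split, info in infos["default"]["splits"].items():
    print(split, info["num_examples"])
```

This should print 10692 for train, 1979 for test, and 1086 for validation, matching the split sizes recorded in the JSON above.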
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7bb12b30b0e48a689d06b5eb19e04586aeabe54fd5d53f3b7adbe38b2024629
+ size 2942
tweet_qa.py ADDED
@@ -0,0 +1,112 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """TWEETQA: A Social Media Focused Question Answering Dataset"""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{xiong2019tweetqa,
+     title={TWEETQA: A Social Media Focused Question Answering Dataset},
+     author={Wenhan Xiong and Jiawei Wu and Hong Wang and Vivek Kulkarni and Mo Yu and Shiyu Chang and Xiaoxiao Guo and William Yang Wang},
+     year={2019},
+     eprint={1907.06292},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.
+ """
+
+ _HOMEPAGE = "https://tweetqa.github.io/"
+
+ _LICENSE = "CC BY-SA 4.0"
+
+ _URL = "https://sites.cs.ucsb.edu/~xwhan/datasets/tweetqa.zip"
+
+
+ class TweetQA(datasets.GeneratorBasedBuilder):
+     """TweetQA: first large-scale dataset for QA over social media data"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "Question": datasets.Value("string"),
+                 "Answer": datasets.Sequence(datasets.Value("string")),
+                 "Tweet": datasets.Value("string"),
+                 "qid": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URL)
+         train_path = os.path.join(data_dir, "TweetQA_data", "train.json")
+         test_path = os.path.join(data_dir, "TweetQA_data", "test.json")
+         dev_path = os.path.join(data_dir, "TweetQA_data", "dev.json")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": train_path,
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": test_path,
+                     "split": "test",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": dev_path,
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+
+         with open(filepath, encoding="utf-8") as f:
+             tweet_qa = json.load(f)
+             for data in tweet_qa:
+                 id_ = data["qid"]
+
+                 yield id_, {
+                     "Question": data["Question"],
+                     "Answer": [] if split == "test" else data["Answer"],
+                     "Tweet": data["Tweet"],
+                     "qid": data["qid"],
+                 }
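
A hedged usage sketch for this script (assumes a local checkout and a 1.x version of the `datasets` library, which loads Python dataset scripts directly):

```
from datasets import load_dataset

# Point load_dataset at the local script instead of the Hub name.
dataset = load_dataset("./tweet_qa.py")

# _generate_examples empties the Answer field for the test split.
assert dataset["test"][0]["Answer"] == []

# Expected sizes per dataset_infos.json: train 10692, validation 1086, test 1979.
print({split: ds.num_rows for split, ds in dataset.items()})
```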