---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
---

It is also worth noting that the data collected from social media can not only capture events and developments in real time but also capture individual opinions, and thus requires reasoning related to the authorship of the content, as illustrated in Table 1.

> Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data:

- Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets relate to the authors.
- Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires an understanding of common oral English. TweetQA also requires understanding some tweet-specific English, such as conversation-style English.
- Understanding of user IDs & hashtags: Tweets often contain user IDs and hashtags, which are single special tokens. Understanding these special tokens is important for answering person- or event-related questions.

### Other Known Limitations

[More Information Needed]

## Additional Information

The annotated answers are validated by the authors as follows: for the purposes of human performance evaluation and inter-annotator agreement checking, the authors launch a different set of HITs to ask workers to answer questions in the test and development sets. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label a question as “NA” if they think it is not answerable. The authors find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%).

Since the answers collected at this step and at the previous step are written by different workers, the answers can be written in different text forms even when they are semantically equivalent; for example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, the authors manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answer pairs are semantically equivalent, 2% are partially equivalent (one of them is incomplete), and 8% are totally inconsistent. The answers collected at this step are also used to measure human performance. 59 individual workers participated in this process.

### Dataset Curators

Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Xiaoxiao Guo, Shiyu Chang, and William Yang Wang.

### Licensing Information

CC BY-SA 4.0.

### Citation Information

```
@inproceedings{xiong2019tweetqa,
  title={TweetQA: A Social Media Focused Question Answering Dataset},
  author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year={2019}
}
```

### Contributions

Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
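
For readers who want to inspect the data described above, here is a minimal sketch of loading it with the Hugging Face `datasets` library. The `tweet_qa` Hub identifier and the `Question`/`Answer`/`Tweet`/`qid` field names are assumptions based on the public TweetQA release and may need adjusting.

```python
# Minimal sketch of loading TweetQA via the `datasets` library.
# Assumptions: the dataset is available on the Hugging Face Hub as
# "tweet_qa" and each example carries Question, Answer, Tweet, and qid
# fields; adjust the identifier/field names if the release differs.
from datasets import load_dataset

dataset = load_dataset("tweet_qa")

print(dataset)  # shows the available splits and their sizes

example = dataset["train"][0]
print(example["Tweet"])     # the tweet used as context
print(example["Question"])  # the crowdsourced question
print(example["Answer"])    # list of annotated (possibly abstractive) answers
```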