---
license: cc-by-sa-4.0
task_categories:
- question-answering
- table-question-answering
- text-generation
language:
- en
tags:
- croissant
pretty_name: UDA-QA
size_categories:
- 10K<n<100K
---

#### Descriptive Statistics

Sub Dataset (folder_name) | Source Domain | Doc Format | Doc Num | Q&A Num | Avg #Words | Avg #Pages | Q&A Types
--- | --- | --- | --- | --- | --- | --- | ---
FinHybrid (fin) | finance reports | PDF | 788 | 8190 | 76.6k | 147.8 | arithmetic
TatHybrid (tat) | finance reports | PDF | 170 | 14703 | 77.5k | 148.5 | extractive, counting, arithmetic
PaperTab (paper_tab) | academic papers | PDF | 307 | 393 | 6.1k | 11.0 | extractive, yes/no, free-form
PaperText (paper_text) | academic papers | PDF | 1087 | 2804 | 5.9k | 10.6 | extractive, yes/no, free-form
FetaTab (feta) | Wikipedia | PDF & HTML | 878 | 1023 | 6.0k | 14.9 | free-form
NqText (nq) | Wikipedia | PDF & HTML | 645 | 2477 | 6.1k | 14.9 | extractive
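For orientation, below is a minimal loading sketch with the 🤗 `datasets` library. The repository id `qinchuanhui/UDA-QA`, the use of the `folder_name` values above as configuration names, and the `test` split name are assumptions inferred from this card, not a confirmed interface of the release.

```python
# Minimal loading sketch. Repository id, configuration names, and split name
# are assumptions from the dataset card; inspect the repo layout to confirm.
from datasets import load_dataset

# Load one sub-dataset by its folder_name, e.g. FinHybrid ("fin").
fin_qa = load_dataset("qinchuanhui/UDA-QA", "fin")

# Split names are an assumption; check fin_qa.keys() first.
sample = fin_qa["test"][0]
print(sample["doc_name"], sample["question"])
```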
#### Data Fields

Field Name | Field Type | Description | Example
--- | --- | --- | ---
doc_name | string | name of the source document | 1912.01214
q_uid | string | unique id of the question | 9a05a5f4351db75da371f7ac12eb0b03607c4b87
question | string | the question posed about the document | which datasets did they experiment with?
answer <br> or answer_1, answer_2 <br> or short_answer, long_answer | string | ground-truth answer(s) | Europarl, MultiUN

**Additional Notes:** Some sub-datasets provide multiple ground-truth answers, organized as `answer_1`, `answer_2` (in FinHybrid, PaperTab, and PaperText) or as `short_answer`, `long_answer` (in NqText). In the TatHybrid sub-dataset, the answer is organized as a sequence, because it involves the multi-span Q&A type. Some sub-datasets also carry unique data fields: for example, `doc_url` in FetaTab and NqText gives the source Wikipedia page URL, while `answer_type` and `answer_scale` in TatHybrid provide extended answer references.
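Because the answer field varies across sub-datasets, a small normalization helper keeps downstream evaluation code uniform. The sketch below assumes each example is a plain dict keyed by the field names above; the helper name `extract_answers` is illustrative, not part of the release.

```python
# Sketch of normalizing the answer field across UDA-QA sub-datasets.
# Assumes each example is a dict with the field names from the card.
from typing import List


def extract_answers(example: dict) -> List[str]:
    """Collect every ground-truth answer variant present in an example."""
    answers = []
    if "answer" in example:
        value = example["answer"]
        # TatHybrid stores a sequence of spans; other sub-datasets store one value.
        if isinstance(value, (list, tuple)):
            answers.extend(str(v) for v in value)
        else:
            answers.append(str(value))
    # FinHybrid / PaperTab / PaperText style: two alternative answers.
    answers.extend(str(example[k]) for k in ("answer_1", "answer_2") if k in example)
    # NqText style: short and long answers.
    answers.extend(str(example[k]) for k in ("short_answer", "long_answer") if k in example)
    return answers
```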
## Dataset Creation

### Source Data

#### Data Collection and Processing

We collect the Q&A labels from openly released datasets (i.e., the source datasets), all of which were annotated by human participants. We then apply a series of construction steps, including source-document identification, categorization, filtering, and data transformation.

#### Who are the source data producers?

[1] Chen, Z., Chen, W., Smiley, C., Shah, S., Borova, I., Langdon, D., Moussa, R., Beane, M., Huang, T.-H., Routledge, B., et al. FinQA: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122 (2021).

[2] Zhu, F., Lei, W., Feng, F., Wang, C., Zhang, H., and Chua, T.-S. Towards complex document understanding by discrete reasoning. In Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4857–4866.

[3] Dasigi, P., Lo, K., Beltagy, I., Cohan, A., Smith, N. A., and Gardner, M. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).

[4] Nan, L., Hsieh, C., Mao, Z., Lin, X. V., Verma, N., Zhang, R., Kryściński, W., Schoelkopf, H., Kong, R., Tang, X., et al. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics 10 (2022), 35–49.

[5] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.

## Considerations for Using the Data

#### Personal and Sensitive Information

The dataset does not contain data that might be considered personal, sensitive, or private. The data sources are publicly available reports, papers, and Wikipedia pages, which are widely used and accepted by the broader community.

## Dataset Card Contact

qinchuanhui@gmail.com