|
--- |
|
license: cc-by-sa-4.0 |
|
task_categories: |
|
- question-answering |
|
- table-question-answering |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- croissant |
|
pretty_name: UDA-QA |
|
size_categories: |
|
- 10K<n<100K |
|
config_names: |
|
- feta |
|
- nq |
|
- paper_text |
|
- paper_tab |
|
- fin |
|
- tat |
|
dataset_info: |
|
- config_name: feta |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer |
|
dtype: string |
|
- name: doc_url |
|
dtype: string |
|
- config_name: nq |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: short_answer |
|
dtype: string |
|
- name: long_answer |
|
dtype: string |
|
- name: doc_url |
|
dtype: string |
|
- config_name: paper_text |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
- name: answer_3 |
|
dtype: string |
|
- config_name: paper_tab |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
- name: answer_3 |
|
dtype: string |
|
- config_name: fin |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
- config_name: tat
  features:
  - name: doc_name
    dtype: string
  - name: q_uid
    dtype: string
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: answer_type
    dtype: string
  - name: answer_scale
    dtype: string
configs: |
|
- config_name: feta |
|
data_files: |
|
- split: test |
|
path: feta/test* |
|
- config_name: nq |
|
data_files: |
|
- split: test |
|
path: nq/test* |
|
- config_name: paper_text |
|
data_files: |
|
- split: test |
|
path: paper_text/test* |
|
- config_name: paper_tab |
|
data_files: |
|
- split: test |
|
path: paper_tab/test* |
|
- config_name: fin |
|
data_files: |
|
- split: test |
|
path: fin/test* |
|
- config_name: tat |
|
data_files: |
|
- split: test |
|
path: tat/test* |
|
--- |
|
# Dataset Card for UDA-QA
|
|
|
UDA (Unstructured Document Analysis) is a benchmark suite for Retrieval-Augmented Generation (RAG) in real-world document analysis.

Each entry in the UDA dataset is organized as a *document-question-answer* triplet: a question is raised about the document and paired with a corresponding ground-truth answer.
|
The documents are retained in their original file formats without parsing or segmentation; |
|
they consist of both textual and tabular data, reflecting the complex nature of real-world analytical scenarios. |
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
|
|
- **Curated by:** Yulong Hui, Tsinghua University |
|
- **Language(s) (NLP):** English |
|
- **License:** CC-BY-SA-4.0 |
|
- **Repository:** https://github.com/qinchuanhui/UDA-Benchmark |
|
|
|
## Uses |
|
|
|
### Direct Use |
|
|
|
Question-answering tasks on complete unstructured documents. |
|
|
|
After loading the dataset, you should also **download the source document files from the folder `src_doc_files`**.
|
|
|
For more usage guidelines, please refer to https://github.com/qinchuanhui/UDA-Benchmark
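
As a minimal loading sketch: the snippet below assumes this dataset's Hugging Face repo id is `qinchuanhui/UDA-QA` (substitute the id shown on this page if it differs); everything else follows the config and split layout described in this card.

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Load one sub-dataset (config) by name; every config ships a single "test" split.
dataset = load_dataset("qinchuanhui/UDA-QA", "feta", split="test")
print(dataset[0]["question"], "->", dataset[0]["answer"])

# Fetch only the raw source documents (PDF/HTML) referenced by `doc_name`.
snapshot_download(
    repo_id="qinchuanhui/UDA-QA",  # assumed repo id; adjust if it differs
    repo_type="dataset",
    allow_patterns="src_doc_files/*",
    local_dir="./uda_data",
)
```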
|
|
|
|
|
### Extended Use |
|
|
|
- Evaluate the effectiveness of retrieval strategies, using the evidence provided in the `extended_qa_info` folder (see the download sketch below).
- Directly assess the performance of LLMs in numerical reasoning and table reasoning, using the evidence in the `extended_qa_info` folder as context.
- Assess the effectiveness of parsing strategies on unstructured PDF documents.
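
The evidence folder can be fetched the same way as the source documents; this is a sketch under the same assumed repo id (`qinchuanhui/UDA-QA`):

```python
from huggingface_hub import snapshot_download

# Pull only the `extended_qa_info` folder holding the per-question evidence.
snapshot_download(
    repo_id="qinchuanhui/UDA-QA",  # assumed repo id; adjust if it differs
    repo_type="dataset",
    allow_patterns="extended_qa_info/*",
    local_dir="./uda_data",
)
```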
|
|
|
|
|
## Dataset Structure |
|
|
|
|
#### Descriptive Statistics |
|
|
|
Sub-dataset (config name) | Source Domain | Doc Format | Doc Num | Q&A Num | Avg #Words per Doc | Avg #Pages per Doc | Q&A Types
--- | --- | --- | --- | --- | --- | --- | ---
FinHybrid (fin) | finance reports | PDF | 788 | 8190 | 76.6k | 147.8 | arithmetic
TatHybrid (tat) | finance reports | PDF | 170 | 14703 | 77.5k | 148.5 | extractive, counting, arithmetic
PaperTab (paper_tab) | academic papers | PDF | 307 | 393 | 6.1k | 11.0 | extractive, yes/no, free-form
PaperText (paper_text) | academic papers | PDF | 1087 | 2804 | 5.9k | 10.6 | extractive, yes/no, free-form
FetaTab (feta) | Wikipedia | PDF & HTML | 878 | 1023 | 6.0k | 14.9 | free-form
NqText (nq) | Wikipedia | PDF & HTML | 645 | 2477 | 6.1k | 14.9 | extractive
|
|
|
|
|
|
|
#### Data Fields |
|
Field Name | Type | Description | Example
--- | --- | --- | ---
doc_name | string | name of the source document | 1912.01214
q_uid | string | unique ID of the question | 9a05a5f4351db75da371f7ac12eb0b03607c4b87
question | string | the question raised about the document | which datasets did they experiment with?
answer <br />or answer_1, answer_2, ... <br />or short_answer, long_answer | string | ground-truth answer(s) | Europarl, MultiUN
|
|
|
**Additional Notes:** Some sub-datasets provide multiple ground-truth answers, organized as `answer_1`, `answer_2`, ... (in FinHybrid, PaperTab, and PaperText) or as `short_answer`, `long_answer` (in NqText). In TatHybrid, the answer is organized as a sequence because of its multi-span Q&A type. Some sub-datasets also have unique data fields: `doc_url` in FetaTab and NqText records the source Wikipedia page, while `answer_type` and `answer_scale` in TatHybrid provide extended answer references.
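
Because the answer fields differ across configs, a small helper can normalize them into a flat list of gold answers. This is only an illustrative sketch based on the field layout above; the `gold_answers` helper is not part of the dataset:

```python
def gold_answers(example: dict, config: str) -> list:
    """Collect the ground-truth answers of one example as a flat list."""
    if config in ("paper_tab", "paper_text"):
        keys = ("answer_1", "answer_2", "answer_3")
    elif config == "fin":
        keys = ("answer_1", "answer_2")
    elif config == "nq":
        keys = ("short_answer", "long_answer")
    elif config == "tat":
        # Multi-span answers come as a sequence of strings.
        return list(example["answer"])
    else:  # feta
        keys = ("answer",)
    # Skip fields that are absent or empty for this particular example.
    return [example[k] for k in keys if example.get(k)]
```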
|
|
|
|
|
## Dataset Creation |
|
|
|
### Source Data |
|
|
|
|
|
|
#### Data Collection and Processing |
|
|
|
We collect the Q&A labels from openly released datasets (i.e., the source datasets listed below), all of which were annotated by human participants.
We then perform a series of essential construction steps, including source-document identification, categorization, filtering, and data transformation.
|
|
|
#### Who are the source data producers? |
|
|
|
The Q&A annotations are drawn from the following source datasets:

[1] Chen, Z., Chen, W., Smiley, C., Shah, S., Borova, I., Langdon, D., Moussa, R., Beane, M., Huang, T.-H., Routledge, B., et al. FinQA: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122 (2021).
|
|
|
[2] Zhu, F., Lei, W., Feng, F., Wang, C., Zhang, H., and Chua, T.-S. Towards complex document understanding by discrete reasoning. In Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4857–4866.
|
|
|
[3] Dasigi, P., Lo, K., Beltagy, I., Cohan, A., Smith, N. A., and Gardner, M. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).
|
|
|
[4] Nan, L., Hsieh, C., Mao, Z., Lin, X. V., Verma, N., Zhang, R., Kryściński, W., Schoelkopf, H., Kong, R., Tang, X., et al. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics 10 (2022), 35–49.
|
|
|
[5] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
|
|
|
|
|
## Considerations for Using the Data |
|
#### Personal and Sensitive Information |
|
|
|
The dataset does not contain data that might be considered personal, sensitive, or private. Its sources are publicly available financial reports, academic papers, and Wikipedia pages, which have been widely used and accepted by the broader community.
|
|
|
|
|
|
|
|
|
|
|
|
## Dataset Card Contact |
|
|
|
qinchuanhui@gmail.com |