---
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- question-answering
language:
- en
tags:
- documents
- tables
- VQA
pretty_name: WikiDT
size_categories:
- 100K<n<1M
---

# Dataset Card for WikiDT

## Dataset Description

### Dataset Summary

WikiDT is a table visual question answering (TableVQA) dataset built from rendered Wikipedia pages, with additional annotations for table detection, table structure recognition, and retrieval. It contains 70,652 QA samples in total. Each QA sample is a tuple of (image, question, answer), and is additionally annotated with retrieval labels (which subpage, and which table). 53,698 QA samples also have SQL annotation.

For each subpage, OCR and table extraction annotations from two sources are available. While rendering the screenshots, the ground-truth table annotation is recorded. Meanwhile, to make the dataset realistic, we also requested OCR and table extraction from [Amazon Textract](https://aws.amazon.com/textract/) for each subpage (results obtained between Feb 28 and Mar 6, 2023).

### Supported Tasks and Leaderboards

WikiDT supports table visual question answering as the primary task, along with the supporting sub-tasks of subpage/table retrieval, table detection, and table structure recognition.

### Languages

The pages, questions, and answers are in English.

## Dataset Structure

The WikiDT dataset has the following file structure:

```sh
+--WikiDT-dataset
|  +--WikiTableExtraction
|  |  +--detection
|  |  |  +--images                # sub-page images
|  |  |  +--train                 # xml table bbox annotations
|  |  |  +--test                  # xml table bbox annotations
|  |  |  +--val                   # xml table bbox annotations
|  |  |  images_filelist.txt      # index of 54,032 images
|  |  |  test_filelist.txt        # index of 5,410 test samples
|  |  |  train_filelist.txt       # index of 43,248 train samples
|  |  |  val_filelist.txt         # index of 5,347 val samples
|  |  +--structure
|  |  |  +--images                # images cropped to the table region
|  |  |  +--train                 # xml table bbox annotations
|  |  |  +--test                  # xml table bbox annotations
|  |  |  +--val                   # xml table bbox annotations
|  |  |  images_filelist.txt      # index of 159,898 images
|  |  |  test_filelist.txt        # index of 15,989 test samples
|  |  |  train_filelist.txt       # index of 129,980 train samples
|  |  |  val_filelist.txt         # index of 15,991 val samples
|  +--samples                     # 70,652 TableVQA samples in total, across the three json files
|  |  +--train.json
|  |  +--test.json
|  |  +--val.json
|  +--images                      # full-page images
|  +--ocr                         # text and bounding boxes for the table content
|  |  +--textract                 # detected by the Amazon Textract API
|  |  +--web                      # extracted from the HTML information
|  +--tsv                         # extracted tables in tsv format
|  |  +--textract                 # detected by the Amazon Textract API
|  |  +--web                      # extracted from the HTML information
```

### Table VQA annotation example

Here is an example of a Table VQA annotation from `WikiDT-dataset/samples/[train|test|val].json`:

```python
{'all_ocr_files_textract': ['ocr/textract/16301437_page_seg_0.json',
                            'ocr/textract/16301437_page_seg_1.json'],
 'all_ocr_files_web': ['ocr/web/16301437_page_seg_0.json',
                       'ocr/web/16301437_page_seg_1.json'],
 'all_table_files_textract': ['tsv/textract/16301437_page_0.tsv',
                              'tsv/textract/16301437_page_1.tsv'],
 'all_table_files_web': ['tsv/web/16301437_1.tsv', 'tsv/web/16301437_0.tsv'],
 'answer': [['don johnson buckeye st. classic']],
 'image': '16301437_page.png',
 'ocr_retrieval_file_textract': 'ocr/textract/16301437_page_seg_0.json',
 'ocr_retrieval_file_web': 'ocr/web/16301437_page_seg_0.json',
 'question': 'Name the Event which has a Score of 209-197?',
 'sample_id': '14190',
 'sql_str': "SELECT `event` FROM cur_table WHERE `score` = '209-197' ",
 'sub_page': ['16301437_page_seg_0.png', '16301437_page_seg_1.png'],
 'sub_page_retrieved': '16301437_page_seg_0.png',
 'subset': 'TFC',
 'table_id': '2-16301437-1',
 'table_retrieval_file_textract': 'tsv/textract/16301437_page_0.tsv',
 'table_retrieval_file_web': 'tsv/web/16301437_1.tsv'}
```
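The annotation above can be consumed with standard Python tooling. The snippet below is a minimal, illustrative sketch rather than an official loader: it assumes the split files are JSON lists of sample dicts, that the listed paths are relative to the dataset root (`DATA_ROOT` is a placeholder), and that the TSV header matches the backticked column names in `sql_str` (SQLite resolves the identifiers case-insensitively). It loads one sample, reads the web-extracted table referenced by `table_retrieval_file_web`, and evaluates the annotated SQL by registering the table as `cur_table`.

```python
import json
import sqlite3

import pandas as pd

DATA_ROOT = "WikiDT-dataset"  # placeholder: local path to the dataset root

# Load the QA samples (train.json / test.json / val.json share the same schema).
# Assumption: each split file is a JSON list of sample dicts like the example above.
with open(f"{DATA_ROOT}/samples/val.json") as f:
    samples = json.load(f)

sample = samples[0]
print(sample["question"], sample["answer"])

# Read the web-extracted table that the retrieval label points to.
table = pd.read_csv(f"{DATA_ROOT}/{sample['table_retrieval_file_web']}", sep="\t")

# The SQL annotation refers to the table as `cur_table`; execute it with SQLite.
if sample.get("sql_str"):
    con = sqlite3.connect(":memory:")
    table.to_sql("cur_table", con, index=False)
    print(pd.read_sql_query(sample["sql_str"], con))
```

The Textract-based files (`*_textract`) can be swapped in for the `*_web` files to evaluate in the realistic OCR setting.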
### Table structure annotation example

Here is an example of an XML table bounding-box annotation from `WikiDT-dataset/WikiTableExtraction/structure/[train|test|val]/` (a minimal parsing sketch is included at the end of this card):

```xml
<annotation>
   <filename>204_147_page_crop_5.png</filename>
   <source>
      <database>WikiDT Dataset</database>
   </source>
   <size>
      <width>788</width>
      <height>540.0</height>
      <depth>3</depth>
   </size>
   <object>
      <name>table</name>
      <bndbox>
         <xmin>10</xmin>
         <ymin>10</ymin>
         <xmax>778</xmax>
         <ymax>530</ymax>
      </bndbox>
   </object>
   <object>
      <name>header row</name>
      <bndbox>
         <xmin>10</xmin>
         <ymin>10</ymin>
         <xmax>778</xmax>
         <ymax>33</ymax>
      </bndbox>
   </object>
   <object>
      <name>header cell</name>
      <bndbox>
         <xmin>12</xmin>
         <ymin>35</ymin>
         <xmax>776</xmax>
         <ymax>58</ymax>
      </bndbox>
   </object>
   <object>
      <name>table row</name>
      <bndbox>
         <xmin>10</xmin>
         <ymin>60</ymin>
         <xmax>778</xmax>
         <ymax>530</ymax>
      </bndbox>
   </object>
</annotation>
```

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license, as declared in the metadata above.

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
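The XML annotations under `WikiTableExtraction/` follow a Pascal VOC-style layout and can be read with Python's standard library. The sketch below is illustrative only; it assumes the element names shown in the table structure annotation example above (`filename`, `object`, `name`, `bndbox`), and the path in the usage comment is hypothetical.

```python
import xml.etree.ElementTree as ET


def load_boxes(xml_path):
    """Parse one VOC-style annotation file into (image filename, list of labelled boxes)."""
    root = ET.parse(xml_path).getroot()
    image_name = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name")  # e.g. "table", "header row", "table row"
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (float(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, xmin, ymin, xmax, ymax))
    return image_name, boxes


# Usage (hypothetical path; check the actual file names in the split folders):
# name, boxes = load_boxes("WikiDT-dataset/WikiTableExtraction/structure/val/204_147_page_crop_5.xml")
```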