RealCQA: Real-World Complex Question Answering Dataset

This repository contains the dataset used in the paper "RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic" (ICDAR 2023). The dataset is designed to support research in complex question answering over a diverse set of real-world chart images and their associated textual question-answer pairs.

Dataset Overview

The RealCQA dataset consists of 28,266 chart images and 2 million corresponding question-answer pairs, organized into three complementary subsets. Each image is accompanied by a JSON file containing one or more question blocks. The dataset is structured to address a range of question-answering tasks that require an understanding of the visual content of the charts.

Dataset Structure

The dataset is organized into the following folders:

  • Images

    • images: Contains the first 10,000 images.
    • images2: Contains the next 10,000 images.
    • images3: Contains the remaining 8,266 images.
  • JSON Files

    • jsons: Contains the JSON files corresponding to the images in the images folder.
    • jsons2: Contains the JSON files corresponding to the images in the images2 folder.
    • jsons3: Contains the JSON files corresponding to the images in the images3 folder.
  • QA Files: The question-answer files created for our proposed dataset.

    • qa: Contains the QA files corresponding to the images in the images folder.
    • qa2: Contains the QA files corresponding to the images in the images2 folder.
    • qa3: Contains the QA files corresponding to the images in the images3 folder.

File Details

  • Images: JPEG files named in the format PMCxxxxxx_abc.jpg, where xxxxxx is the PubMed Central ID and abc is an identifier specific to the image.
  • JSON Files: JSON files named in the same format as the images. These are ground-truth annotations from the https://chartinfo.github.io challenge; they provide annotations for chart type, text (OCR), text location, text type (axis/tick/legend), and the data used to plot the chart.
  • QA Files: QA files named in the same format as the images. Each QA file is the list of question blocks we created for the corresponding image in our proposed dataset; a sketch of how the three file types can be located for a given image follows this list.
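A minimal sketch, assuming the layout and naming described above, of how the JSON annotation and QA file for a given image might be located. The helper name resolve_paths, the strategy of probing the images, images2, and images3 folders, and the .json extension for QA files are illustrative assumptions, not part of the official loader.

import os

# Illustrative helper (assumption): given an image filename such as
# "PMC8439477___g003.jpg", probe the three folder groups described above and
# return the paths of the matching image, JSON annotation, and QA file.
# The .json extension for QA files is also an assumption.
def resolve_paths(data_dir, image_name):
    stem = os.path.splitext(image_name)[0]
    for suffix in ("", "2", "3"):  # images/jsons/qa, images2/jsons2/qa2, images3/jsons3/qa3
        image_path = os.path.join(data_dir, f"images{suffix}", image_name)
        if os.path.exists(image_path):
            return {
                "image": image_path,
                "json": os.path.join(data_dir, f"jsons{suffix}", stem + ".json"),
                "qa": os.path.join(data_dir, f"qa{suffix}", stem + ".json"),
            }
    raise FileNotFoundError(f"{image_name} not found in any images folder")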

QA Structure

Each QA file contains a list of question blocks in the following format:

[
    {
        "taxonomy id": "2j",
        "QID": "16",
        "question": "Are all the bars in the chart visually horizontal?",
        "answer": "no",
        "answer_type": "Binary",
        "qa_id": "XbUzFtjqsEOF",
        "PMC_ID": "PMC8439477___g003"
    },
    {
        "taxonomy id": "1a",
        "QID": "7a",
        "question": "What is the type of chart?",
        "answer": "Vertical Bar chart",
        "answer_type": "String",
        "qa_id": "wzcdDijkrHtt",
        "PMC_ID": "PMC8439477___g003"
    }
]
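Since each QA file is plain JSON, it can be read with the standard json module. The snippet below is a small sketch (the file path is hypothetical) that groups the question blocks of one chart by their answer type, using the fields shown above.

import json
from collections import defaultdict

# Hypothetical path, for illustration only; use any file from the qa folders.
qa_path = "qa/PMC8439477___g003.json"

with open(qa_path, "r") as f:
    question_blocks = json.load(f)  # a list of dicts, one per question

# Group this chart's questions by answer type (e.g. Binary, String, or the
# ranked/unranked list types introduced in the paper).
by_answer_type = defaultdict(list)
for block in question_blocks:
    by_answer_type[block["answer_type"]].append(block["question"])

for answer_type, questions in by_answer_type.items():
    print(answer_type, len(questions))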

Dataset Loader

To facilitate loading and using the dataset, we provide a custom dataset loader script, dataset.py. This script defines a PyTorch Dataset class to handle loading, preprocessing, and batching of the images and question-answer pairs.

How to Use the Dataset Loader

  1. Setup and Requirements

    Ensure you have the following Python packages installed:

    pip install torch torchvision Pillow
    
  2. Dataset Loader Script

    Use the provided dataset.py to load the dataset. The script is designed to load the dataset efficiently and handles both the training and test splits.

    from dataset import RQADataset
    from torch.utils.data import DataLoader

    # split='test' selects the RQA9357 split used in the paper
    dataset = RQADataset(data_dir='.', split='train')

    # Test loading a single item
    print(f"Number of samples in dataset: {len(dataset)}")
    sample = dataset[0]
    print("Sample data:", sample)

    # Initialize DataLoader with the dataset's custom collate function
    dataloader = DataLoader(dataset, batch_size=4, collate_fn=RQADataset.custom_collate)

    # Test the DataLoader
    for batch in dataloader:
        print("Batch data:", batch)
        break  # Load only one batch for testing

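
For reference, the block below is a minimal sketch of what an equivalent loader could look like if you do not want to depend on dataset.py. It assumes the folder layout and file naming described above; the class name SimpleRQADataset, the sample dictionary keys, and the .json/.jpg pairing are illustrative assumptions and do not reflect the internals of the provided script.

import json
import os

from PIL import Image
from torch.utils.data import Dataset


class SimpleRQADataset(Dataset):
    """Illustrative sketch (assumption): pairs each chart image with its QA blocks."""

    def __init__(self, data_dir="."):
        self.samples = []
        # Walk the three folder groups: images/qa, images2/qa2, images3/qa3.
        for suffix in ("", "2", "3"):
            qa_dir = os.path.join(data_dir, f"qa{suffix}")
            img_dir = os.path.join(data_dir, f"images{suffix}")
            if not os.path.isdir(qa_dir):
                continue
            for qa_file in sorted(os.listdir(qa_dir)):
                stem = os.path.splitext(qa_file)[0]
                self.samples.append(
                    (os.path.join(img_dir, stem + ".jpg"),
                     os.path.join(qa_dir, qa_file))
                )

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image_path, qa_path = self.samples[idx]
        image = Image.open(image_path).convert("RGB")  # chart image
        with open(qa_path, "r") as f:
            qa_blocks = json.load(f)  # list of question blocks for this chart
        return {"image": image, "qa": qa_blocks}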

Citation

If you use this dataset in your research, please cite the following paper:

@InProceedings{10.1007/978-3-031-41682-8_5,
author="Ahmed, Saleem
and Jawade, Bhavin
and Pandey, Shubham
and Setlur, Srirangaraj
and Govindaraju, Venu",
editor="Fink, Gernot A.
and Jain, Rajiv
and Kise, Koichi
and Zanibbi, Richard",
title="RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic",
booktitle="Document Analysis and Recognition - ICDAR 2023",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="66--83",
abstract="We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. Despite efforts to tackle this problem using synthetic charts, solutions are limited by the shortage of annotated real-world data. To fill this gap, we introduce a benchmark and dataset for chart visual QA on real-world charts, offering a systematic analysis of the task and a novel taxonomy for template-based chart question creation. Our contribution includes the introduction of a new answer type, `list', with both ranked and unranked variations. Our study is conducted on a real-world chart dataset from scientific literature, showcasing higher visual complexity compared to other works. Our focus is on template-based QA and how it can serve as a standard for evaluating the first-order logic capabilities of models. The results of our experiments, conducted on a real-world out-of-distribution dataset, provide a robust evaluation of large-scale pre-trained models and advance the field of chart visual QA and formal logic verification for neural networks in general. Our code and dataset is publicly available (https://github.com/cse-ai-lab/RealCQA).",
isbn="978-3-031-41682-8"
}

License

This dataset is licensed under the MIT License. By using this dataset, you agree to abide by its terms and conditions.

Contact

For any questions or issues, please contact the authors of the paper or open an issue in this repository.
