Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
License: CC BY-NC-SA 4.0

Gated access: this repository is publicly accessible, but you must agree to share your contact information and accept the conditions to access its files and content.

Dataset Card for OVQA Instruction Set

Dataset Summary

Odia Visual Question Answering (OVQA) Instruction Set is a multimodal dataset comprising text and images structured in an instruction format, designed for developing Multimodal Large Language Models (MLLMs).

Supported Tasks and Leaderboards

Instruction tuning for Multimodal Large Language Models (MLLMs)

Languages

Odia, English

Dataset Structure

The data is structured as JSON instruction records (distributed in parquet format).
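As a rough illustration of what a JSON instruction record in such a dataset might look like, here is a minimal sketch. The field names (`id`, `image`, `instruction`, `answer`) are assumptions for illustration only, not the documented schema; consult the dataset files and the paper for the actual structure.

```python
import json

# Hypothetical OVQA-style instruction record. Field names are
# illustrative assumptions, not the documented schema.
record_json = """
{
  "id": "ovqa_000001",
  "image": "images/000001.jpg",
  "instruction": "What is shown in the image?",
  "answer": "A temple"
}
"""

record = json.loads(record_json)
print(sorted(record.keys()))  # ['answer', 'id', 'image', 'instruction']
```

In practice the record would pair an Odia (and/or English) question with an image path and answer; the actual dataset may use different keys or nest multiple instruction turns per image.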

Paper

For more details on data preparation, experiments, and evaluation, refer to the paper.

Licensing Information

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

Citation Information

If you find this repository useful, please consider giving it a 👏 and citing:

@inproceedings{parida2025ovqa,
  title  = {{OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language}},
  author = {Parida, Shantipriya and Sahoo, Shashikanta and Sekhar, Sambit and Sahoo, Kalyanamalini and Kotwal, Ketan and Khosla, Sonal and Dash, Satya Ranjan and Bose, Aneesh and Kohli, Guneet Singh and Lenka, Smruti Smita and Bojar, Ondřej},
  year   = {2025},
  note   = {Accepted at the IndoNLP Workshop at COLING 2025}
}