Dataset Card for Odia_VQA_Multimodal_Dataset
Dataset Summary
This dataset contains 27K English-Odia parallel instruction sets (question-answer pairs) and 6K unique images. It is intended for multimodal Visual Question Answering (VQA) in Odia and English, and its instruction-set format supports fine-tuning of multimodal large language models (LLMs).
The Odia data was annotated by native Odia speakers and verified by linguists.
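The exact record schema is not documented in this card; as a rough illustration, a single parallel instruction record might look like the sketch below (all field names are assumptions, not the released schema):

```python
# A minimal sketch of one parallel VQA instruction record.
# All field names are illustrative assumptions, not the dataset's actual schema.
example_record = {
    "image_id": "img_000123",                 # hypothetical identifier of one of the ~6K images
    "question_en": "What color is the bus?",  # English question
    "question_or": "<Odia question>",         # Odia counterpart of the question
    "answer_en": "Red",                       # English answer
    "answer_or": "<Odia answer>",             # Odia counterpart of the answer
}
```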
Supported Tasks and Leaderboards
Visual Question Answering (VQA); multimodal large language model (LLM) fine-tuning
Languages
Odia, English
Dataset Structure
The instruction set is distributed in JSON format.
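A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub and readable with the datasets library; the repository ID below is a placeholder, not a confirmed identifier:

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the dataset's actual Hub ID.
ds = load_dataset("OdiaGenAI/Odia_VQA_Multimodal_Dataset")

# Inspect one record from the training split (split names may differ).
print(ds["train"][0])
```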
Licensing Information
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Citation Information
If you use the OVQA dataset, please consider giving 👏 and citing the following paper:
@inproceedings{parida2025ovqa,
  title = {{OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language}},
  author = {Parida, Shantipriya and Sahoo, Shashikanta and Sekhar, Sambit and Sahoo, Kalyanamalini and Kotwal, Ketan and Khosla, Sonal and Dash, Satya Ranjan and Bose, Aneesh and Kohli, Guneet Singh and Lenka, Smruti Smita and Bojar, Ondřej},
  year = {2025},
  note = {Accepted at the IndoNLP Workshop at COLING 2025}
}
Contributions
- Shantipriya Parida, Silo AI, Helsinki, Finland
- Shashikanta Sahoo, Government College of Engineering Kalahandi, India
- Sambit Sekhar, Odia Generative AI, India
- Satya Ranjan Dash, KIIT University, India
- Kalyanamalini Sahoo, University of Artois, France
- Sonal Khosla, Odia Generative AI, India
- Aneesh Bose, Microsoft, India
- Guneet Singh Kohli, GreyOrange, India
- Ketan Kotwal, Idiap Research Institute, Switzerland
- Smruti Smita Lenka, Odia Generative AI, India
- Ondřej Bojar, UFAL, Charles University, Prague, Czech Republic
Point of Contact:
Shantipriya Parida and Sambit Sekhar