---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: message
      list:
        - name: content
          dtype: string
        - name: role
          dtype: string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 5779284964.76
      num_examples: 44486
    - name: validation
      num_bytes: 708420205.08
      num_examples: 5560
    - name: test
      num_bytes: 744693334.836
      num_examples: 5562
  download_size: 3060451737
  dataset_size: 7232398504.676001
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - question-answering
  - translation
  - image-to-text
language:
  - or
  - en
pretty_name: Odia_VQA_Multimodal_Dataset
size_categories:
  - 10K<n<100K
---

# Dataset Card for Odia_VQA_Multimodal_Dataset

## Dataset Summary

This dataset contains 27K English-Odia parallel instruction sets (question-answer pairs) and 6K unique images. It is intended for multimodal visual question answering (VQA) in Odia and English, and the instruction-set format supports fine-tuning of multimodal large language models (LLMs).

The Odia data was annotated by native speakers and verified by linguists.
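A minimal loading sketch with the 🤗 `datasets` library is below. The owner namespace of the Hub repository is not shown on this card, so the repo id is a placeholder you would need to substitute.

```python
from datasets import load_dataset

# Placeholder repo id: the owner namespace is not shown on this card,
# so substitute the dataset's full Hub id before running.
ds = load_dataset("<owner>/odia_vqa_en_odi_set")

print(ds)                    # DatasetDict with train/validation/test splits
print(ds["train"].num_rows)  # 44486 examples, per the metadata above
```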

## Supported Tasks and Leaderboards

Multimodal large language model (LLM) fine-tuning and visual question answering (VQA).

## Languages

Odia, English

## Dataset Structure

JSON. Each record contains an `id` (string), a `message` list of `{role, content}` turns, and an `image`.
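A sketch of inspecting one record, reusing `ds` from the loading example above; the exact role labels (e.g. `user`/`assistant`) are an assumption, not confirmed by this card.

```python
# Sketch of one record's structure, assuming the features declared in the
# metadata above (id, message, image); role labels are an assumption.
example = ds["train"][0]

print(example["id"])                 # string identifier
for turn in example["message"]:      # list of {role, content} dicts
    print(turn["role"], "->", turn["content"])

example["image"].save("sample.jpg")  # decoded as a PIL image
```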

## Licensing Information

This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC BY-NC-SA 4.0).

## Citation Information

If you use the OVQA dataset, please consider giving 👏 and citing the following paper:

```bibtex
@inproceedings{parida2025ovqa,
  title  = {{OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language}},
  author = {Parida, Shantipriya and Sahoo, Shashikanta and Sekhar, Sambit and Sahoo, Kalyanamalini and Kotwal, Ketan and Khosla, Sonal and Dash, Satya Ranjan and Bose, Aneesh and Kohli, Guneet Singh and Lenka, Smruti Smita and Bojar, Ondřej},
  year   = {2025},
  note   = {Accepted at the IndoNLP Workshop at COLING 2025}
}
```

## Contributions

- Shantipriya Parida, Silo AI, Helsinki, Finland
- Shashikanta Sahoo, Government College of Engineering Kalahandi, India
- Sambit Sekhar, Odia Generative AI, India
- Satya Ranjan Dash, KIIT University, India
- Kalyanamalini Sahoo, University of Artois, France
- Sonal Khosla, Odia Generative AI, India
- Aneesh Bose, Microsoft, India
- Guneet Singh Kohli, GreyOrange, India
- Ketan Kotwal, Idiap Research Institute, Switzerland
- Smruti Smita Lenka, Odia Generative AI, India
- Ondřej Bojar, UFAL, Charles University, Prague, Czech Republic

## Point of Contact

Shantipriya Parida and Sambit Sekhar