RoViQA: a Visual Question Answering model combining RoBERTa and ViT

This repository contains the code for RoViQA, a Visual Question Answering (VQA) model that combines image features extracted using Vision Transformer (ViT) and text features extracted using RoBERTa. The project includes training, inference, and various utility scripts.
GitHub: https://github.com/Tro-fish/RoViQA-Visual_Question_Answering

Model Architecture


RoViQA Overview

RoViQA is a Visual Question Answering (VQA) model that pairs a Vision Transformer (ViT) image encoder with a RoBERTa text encoder to understand and answer questions about images. By combining the strengths of the two encoders, RoViQA processes both visual and textual information jointly to produce accurate answers.
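A minimal sketch of this kind of fusion is shown below. It assumes a concatenate-and-classify head over a fixed answer vocabulary; the fusion layer, the ViT checkpoint name, and the classifier head are illustrative assumptions and may differ from the repository's actual implementation.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, ViTModel

class RoViQASketch(nn.Module):
    """Illustrative fusion of a ViT image encoder and a RoBERTa text encoder.

    Assumptions: each encoder is summarized by its first token, and the two
    summaries are fused by concatenation followed by a small classifier over
    a fixed answer vocabulary.
    """

    def __init__(self, num_answers: int, hidden_dim: int = 768):
        super().__init__()
        self.text_encoder = RobertaModel.from_pretrained("roberta-base")
        # Checkpoint name is an assumption; any ViT-base checkpoint would fit here.
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Encode the question and the image separately.
        text_out = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        image_out = self.image_encoder(pixel_values=pixel_values)
        # Take the first ([CLS]) token of each encoder as a summary vector.
        text_feat = text_out.last_hidden_state[:, 0]
        image_feat = image_out.last_hidden_state[:, 0]
        # Concatenate and classify over the answer vocabulary.
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)
```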

Model parameters

  • Base Models
    • RoBERTa-base: 110M parameters
    • ViT-base: 86M parameters
  • RoViQA: 215M parameters
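These figures can be sanity-checked by summing the parameters of the sketch model above. The answer-vocabulary size of 3129 used here is a common VQA v2 convention and is an assumption, not a value taken from the repository:

```python
# Count parameters of the sketch model defined above.
model = RoViQASketch(num_answers=3129)  # 3129 is an assumed answer-vocabulary size
total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total / 1e6:.1f}M")  # roughly the two base encoders plus the fusion head
```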