---
title: KB-VQA
emoji: πŸ”₯
colorFrom: gray
colorTo: blue
sdk: streamlit
sdk_version: 1.29.0
app_file: app.py
pinned: false
license: apache-2.0
---

## Project File Structure

```
KB-VQA
β”œβ”€β”€ Files: Various files required for the demo, such as sample images and the dissertation report.
β”œβ”€β”€ models
β”‚   β”œβ”€β”€ deformable-detr-detic: DETIC object detection model.
β”‚   β”œβ”€β”€ yolov5: YOLOv5 object detection model (baseline).
β”œβ”€β”€ my_model
β”‚   β”œβ”€β”€ KBVQA.py: Central component implementing the designed model architecture for the Knowledge-Based Visual Question Answering (KB-VQA) project.
β”‚   β”œβ”€β”€ state_manager.py: Manages the user interface and session state for the Run Inference tool of the Streamlit demo app.
β”‚   β”œβ”€β”€ LLAMA2
β”‚   β”‚   β”œβ”€β”€ LLAMA2_model.py: Loads the LLaMA-2 model to be fine-tuned.
β”‚   β”œβ”€β”€ captioner
β”‚   β”‚   β”œβ”€β”€ image_captioning.py: Provides functionality for generating captions for images.
β”‚   β”œβ”€β”€ detector
β”‚   β”‚   β”œβ”€β”€ object_detection.py: Detects objects in images using the object detection models.
β”‚   β”œβ”€β”€ fine_tuner
β”‚   β”‚   β”œβ”€β”€ fine_tuner.py: Main fine-tuning script for LLaMA-2 Chat models.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_handler.py: Handles and prepares the data for fine-tuning LLaMA-2 Chat models.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_data
β”‚   β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_detic.csv: Fine-tuning data prepared by the prompt engineering module using the DETIC detector.
β”‚   β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_yolov5.csv: Fine-tuning data prepared by the prompt engineering module using the YOLOv5 detector.
β”‚   β”œβ”€β”€ results
β”‚   β”‚   β”œβ”€β”€ Demo_Images: Contains a pool of images used for the demo app.
β”‚   β”‚   β”œβ”€β”€ evaluation.py: Provides a comprehensive framework for evaluating the KB-VQA model.
β”‚   β”‚   β”œβ”€β”€ demo.py: Provides a comprehensive framework for visualizing and demonstrating the results of the KB-VQA evaluation.
β”‚   β”‚   β”œβ”€β”€ evaluation_results.xlsx: Contains all the evaluation results based on the evaluation data.
β”‚   β”œβ”€β”€ tabs
β”‚   β”‚   β”œβ”€β”€ home.py: Displays an introduction to the application, with a brief background and a description of the demo tools.
β”‚   β”‚   β”œβ”€β”€ results.py: Manages the interactive Streamlit demo for visualizing model evaluation results and analysis.
β”‚   β”‚   β”œβ”€β”€ run_inference.py: Implements the Run Inference tool for testing and using the fine-tuned models.
β”‚   β”‚   β”œβ”€β”€ model_arch.py: Displays the model architecture along with the accompanying abstract and design details.
β”‚   β”‚   β”œβ”€β”€ dataset_analysis.py: Provides tools for visualizing dataset analyses.
β”‚   β”œβ”€β”€ utilities
β”‚   β”‚   β”œβ”€β”€ ui_manager.py: Manages the user interface for the Streamlit application, handling the creation of and navigation between the various tabs.
β”‚   β”‚   β”œβ”€β”€ gen_utilities.py: Provides a collection of utility functions and classes commonly used across various parts of the project.
β”‚   β”œβ”€β”€ config (All configuration files are kept separate and stored as ".py" for easy reading; this will change after the project submission.)
β”‚   β”‚   β”œβ”€β”€ kbvqa_config.py: Configuration parameters for the main KB-VQA model.
β”‚   β”‚   β”œβ”€β”€ LLAMA2_config.py: Configuration parameters for the LLaMA-2 model.
β”‚   β”‚   β”œβ”€β”€ captioning_config.py: Configuration parameters for the captioning model (InstructBLIP).
β”‚   β”‚   β”œβ”€β”€ dataset_config.py: Configuration parameters for dataset processing.
β”‚   β”‚   β”œβ”€β”€ evaluation_config.py: Configuration parameters for the KB-VQA model evaluation.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_config.py: Configurable parameters for the fine-tuning module.
β”‚   β”‚   β”œβ”€β”€ inference_config.py: Configurable parameters for the Run Inference tool in the demo app.
β”œβ”€β”€ app.py: Main entry point for Streamlit; the first page in the Streamlit app.
β”œβ”€β”€ README.md: This file.
β”œβ”€β”€ requirements.txt: Requirements file for the whole project, covering everything needed to run the demo app in the HuggingFace Space environment.
```
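
For local testing outside the HuggingFace Space (the Space itself launches `app.py` automatically via the `app_file` setting above), a minimal sketch of how the demo can be started, assuming a standard Python and Streamlit setup:

```bash
# Install the project dependencies (the same requirements.txt used by the HuggingFace Space environment)
pip install -r requirements.txt

# Launch the Streamlit demo; app.py is the main entry point listed in the tree above
streamlit run app.py
```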