---
title: KB-VQA
emoji: πŸ”₯
colorFrom: gray
colorTo: blue
sdk: streamlit
sdk_version: 1.29.0
app_file: app.py
pinned: false
license: apache-2.0
---

------

# Demonstration Environment
The project demo app can be accessed from the developed [**KB-VQA HF Space**](https://huggingface.co/spaces/m7mdal7aj/KB-VQA), and the entire code can be accessed from [here](https://huggingface.co/spaces/m7mdal7aj/KB-VQA/tree/main).
To run the demo app locally, run `streamlit run app.py` from the root of the local code repository. This launches the whole app; however, a GPU is required to use the **Run Inference Tool**.
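
Since only the inference tool is GPU-bound while the rest of the app is not, it can help to check for a GPU up front when running locally. Below is a minimal sketch of such a check, assuming PyTorch is installed; the actual gating logic in `my_model/state_manager.py` may differ.

```python
# Minimal sketch: warn before the GPU-dependent inference tool is used.
# Assumes PyTorch is installed; the real check in state_manager.py may differ.
import streamlit as st
import torch

def gpu_available() -> bool:
    """Return True if a CUDA-capable GPU is visible to PyTorch."""
    return torch.cuda.is_available()

if not gpu_available():
    st.warning(
        "The Run Inference Tool requires a GPU; it will not work on "
        "CPU-only hardware. The other demo tabs remain usable."
    )
```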

## Project File Structure
Each main Python module of the project is extensively documented to guide the reader on the module's role and how to use it, along with its corresponding classes and functions.

Below is the overall file structure of the project:

<pre>
KB-VQA
β”œβ”€β”€ Files: Various files required for the demo, such as sample images, the dissertation report, etc.
β”œβ”€β”€ models
β”‚   β”œβ”€β”€ deformable-detr-detic: DETIC Object Detection Model.
β”‚   β”œβ”€β”€ yolov5: YOLOv5 Object Detection Model (baseline).
β”œβ”€β”€ my_model
β”‚   β”œβ”€β”€ KBVQA.py: The central component implementing the designed model architecture for the Knowledge-Based Visual Question Answering (KB-VQA) project (see the illustrative sketch below the tree).
β”‚   β”œβ”€β”€ state_manager.py: Manages the user interface and session state to facilitate the Run Inference tool of the Streamlit demo app.
β”‚   β”œβ”€β”€ LLAMA2
β”‚   β”‚   β”œβ”€β”€ LLAMA2_model.py: Loads the LLaMA-2 model to be fine-tuned.
β”‚   β”œβ”€β”€ captioner
β”‚   β”‚   β”œβ”€β”€ image_captioning.py: Provides functionality for generating captions for images.
β”‚   β”œβ”€β”€ detector
β”‚   β”‚   β”œβ”€β”€ object_detection.py: Detects objects in images using the object detection models.
β”‚   β”œβ”€β”€ fine_tuner
β”‚   β”‚   β”œβ”€β”€ fine_tuner.py: Main fine-tuning script for LLaMA-2 Chat models.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_handler.py: Handles and prepares the data for fine-tuning LLaMA-2 Chat models.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_data
β”‚   β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_detic.csv: Fine-tuning data prepared by the prompt engineering module using the DETIC detector.
β”‚   β”‚   β”‚   β”œβ”€β”€ fine_tuning_data_yolov5.csv: Fine-tuning data prepared by the prompt engineering module using the YOLOv5 detector.
β”‚   β”œβ”€β”€ results
β”‚   β”‚   β”œβ”€β”€ Demo_Images: Contains a pool of images used for the demo app.
β”‚   β”‚   β”œβ”€β”€ evaluation.py: Provides a comprehensive framework for evaluating the KB-VQA model.
β”‚   β”‚   β”œβ”€β”€ demo.py: Provides a comprehensive framework for visualizing and demonstrating the results of the KB-VQA evaluation.
β”‚   β”‚   β”œβ”€β”€ evaluation_results.xlsx: Contains all the evaluation results based on the evaluation data.
β”‚   β”œβ”€β”€ tabs
β”‚   β”‚   β”œβ”€β”€ home.py: Displays an introduction to the application, with a brief background and a description of the demo tools.
β”‚   β”‚   β”œβ”€β”€ results.py: Manages the interactive Streamlit demo for visualizing model evaluation results and analysis.
β”‚   β”‚   β”œβ”€β”€ run_inference.py: Responsible for the 'run inference' tool to test and use the fine-tuned models.
β”‚   β”‚   β”œβ”€β”€ model_arch.py: Displays the model architecture along with the accompanying abstract and design details.
β”‚   β”‚   β”œβ”€β”€ dataset_analysis.py: Provides tools for visualizing dataset analyses.
β”‚   β”œβ”€β”€ utilities
β”‚   β”‚   β”œβ”€β”€ ui_manager.py: Manages the user interface for the Streamlit application, handling the creation and navigation of various tabs.
β”‚   β”‚   β”œβ”€β”€ gen_utilities.py: Provides a collection of utility functions and classes commonly used across various parts of the project.
β”‚   β”œβ”€β”€ config (All configuration files are kept separate and stored as ".py" for easy reading; this will change after the project submission.)
β”‚   β”‚   β”œβ”€β”€ kbvqa_config.py: Configuration parameters for the main KB-VQA model.
β”‚   β”‚   β”œβ”€β”€ LLAMA2_config.py: Configuration parameters for LLaMA-2 model.
β”‚   β”‚   β”œβ”€β”€ captioning_config.py : Configuration parameters for the captioning model (InstructBLIP).
β”‚   β”‚   β”œβ”€β”€ dataset_config.py: Configuration parameters for the dataset processing.
β”‚   β”‚   β”œβ”€β”€ evaluation_config.py: Configuration parameters for the KB-VQA model evaluation.
β”‚   β”‚   β”œβ”€β”€ fine_tuning_config.py: Configuration parameters for the fine-tuning module.
β”‚   β”‚   β”œβ”€β”€ inference_config.py: Configuration parameters for the Run Inference tool in the demo app.
β”œβ”€β”€ app.py: Main entry point for Streamlit (first page in the Streamlit app).
β”œβ”€β”€ README.md: This readme file.
β”œβ”€β”€ requirements.txt: Requirements file covering everything needed to run the demo app in the HuggingFace Space environment.
</pre>
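
To make the division of responsibilities in the tree concrete, the sketch below shows one plausible way the main modules compose at inference time: the captioner and detector supply visual context that is assembled into a prompt for the fine-tuned LLaMA-2 model. All names here (`VisualContext`, `build_prompt`, the commented usage) are illustrative assumptions, not the project's actual API; consult `my_model/KBVQA.py` for the real implementation.

```python
# Illustrative sketch only: hypothetical names approximating how the modules
# in the tree above might compose. The real API lives in my_model/KBVQA.py.
from dataclasses import dataclass

@dataclass
class VisualContext:
    caption: str        # produced by my_model/captioner/image_captioning.py
    objects: list[str]  # produced by my_model/detector/object_detection.py

def build_prompt(question: str, ctx: VisualContext) -> str:
    """Assemble a text prompt for the fine-tuned LLaMA-2 model.

    The exact template is defined by the project's prompt engineering
    module; this layout is an assumption for illustration.
    """
    return (
        f"Image caption: {ctx.caption}\n"
        f"Detected objects: {', '.join(ctx.objects)}\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical usage:
# ctx = VisualContext(caption=captioner.generate_caption(img),
#                     objects=detector.detect(img))
# answer = llama2.generate(build_prompt("What is the bird eating?", ctx))
```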



**Author:** [Mohammed Bin Ali Alhaj](https://www.linkedin.com/in/m7mdal7aj)