m7mdal7aj committed commit a4babd7 • 1 parent: 88cfe85

Update README.md

Files changed (1): README.md (+52 −1)
README.md CHANGED
@@ -10,4 +10,55 @@ pinned: false
 license: apache-2.0
 ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+
+
+ ## Project File Structure
+
+ ```
+ KB-VQA
+ ├── Files: Various files required for the demo, such as sample images, etc.
+ ├── models
+ │   ├── deformable-detr-detic: DETIC object detection model.
+ │   ├── yolov5: YOLOv5 object detection model (baseline).
+ ├── my_model
+ │   ├── KBVQA.py: Central module implementing the designed model architecture for the Knowledge-Based Visual Question Answering (KB-VQA) project.
+ │   ├── state_manager.py: Manages the user interface and session state for the Run Inference tool of the Streamlit demo app.
+ │   ├── LLAMA2
+ │   │   ├── LLAMA2_model.py: Loads the LLaMA-2 model for fine-tuning.
+ │   ├── captioner
+ │   │   ├── image_captioning.py: Generates captions for images.
+ │   ├── detector
+ │   │   ├── object_detection.py: Detects objects in images using the object detection models.
+ │   ├── fine_tuner
+ │   │   ├── fine_tuner.py: Main fine-tuning script for LLaMA-2 Chat models.
+ │   │   ├── fine_tuning_data_handler.py: Handles and prepares the data for fine-tuning LLaMA-2 Chat models.
+ │   │   ├── fine_tuning_data
+ │   │   │   ├── fine_tuning_data_detic.csv: Fine-tuning data prepared by the prompt engineering module using the DETIC detector.
+ │   │   │   ├── fine_tuning_data_yolov5.csv: Fine-tuning data prepared by the prompt engineering module using the YOLOv5 detector.
+ │   ├── results
+ │   │   ├── Demo_Images: Pool of images used for the demo app.
+ │   │   ├── evaluation.py: Framework for evaluating the KB-VQA model.
+ │   │   ├── demo.py: Framework for visualizing and demonstrating the results of the KB-VQA evaluation.
+ │   │   ├── evaluation_results.xlsx: All evaluation results based on the evaluation data.
+ │   ├── tabs
+ │   │   ├── home.py: Displays an introduction to the application, with a brief background and a description of the demo tools.
+ │   │   ├── results.py: Manages the interactive Streamlit demo for visualizing model evaluation results and analysis.
+ │   │   ├── run_inference.py: Implements the 'Run Inference' tool for testing and using the fine-tuned models.
+ │   │   ├── model_arch.py: Displays the model architecture with the accompanying abstract and design details.
+ │   │   ├── dataset_analysis.py: Tools for visualizing dataset analyses.
+ │   ├── utilities
+ │   │   ├── ui_manager.py: Manages the user interface for the Streamlit application, handling the creation and navigation of the tabs.
+ │   │   ├── gen_utilities.py: Collection of utility functions and classes used across the project.
+ │   ├── config (all configuration files are kept separate and stored as ".py" for easy reading; this will change after the project submission)
+ │   │   ├── kbvqa_config.py: Configuration parameters for the main KB-VQA model.
+ │   │   ├── LLAMA2_config.py: Configuration parameters for the LLaMA-2 model.
+ │   │   ├── captioning_config.py: Configuration parameters for the captioning model (InstructBLIP).
+ │   │   ├── dataset_config.py: Configuration parameters for dataset processing.
+ │   │   ├── evaluation_config.py: Configuration parameters for the KB-VQA model evaluation.
+ │   │   ├── fine_tuning_config.py: Configurable parameters for the fine-tuning module.
+ │   │   ├── inference_config.py: Configurable parameters for the Run Inference tool in the demo app.
+ ├── app.py: Main entry point for Streamlit (first page in the Streamlit app).
+ ├── README.md: This file.
+ ├── requirements.txt: Requirements for the whole project, including everything needed to run the demo app in the Hugging Face Spaces environment.
+ ```