---
license: other
datasets:
- remyxai/vqasynth_spacellava
---

# Model Card for SpaceMinitron-4B

**SpaceMinitron-4B** uses [Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base) as the LLM backbone along with the fused DINOv2+SigLIP features of [prismatic-vlms](https://github.com/TRI-ML/prismatic-vlms).

## Model Details

SpaceMinitron-4B is fully fine-tuned on the [spacellava dataset](https://huggingface.co/datasets/remyxai/vqasynth_spacellava), which was designed with [VQASynth](https://github.com/remyxai/VQASynth/tree/main) to enhance spatial reasoning as in [SpatialVLM](https://spatial-vlm.github.io/).

### Model Description

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM, enhancing the spatial reasoning of multimodal models.
With a pipeline of expert models, we can infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.

- **Developed by:** remyx.ai
- **Model type:** Multimodal Model, Vision Language Model, Prismatic-vlms, Minitron-4B-Base
- **Finetuned from model:** Minitron-4B-Base ([NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf))

### Model Sources
- **Dataset:** [SpaceLLaVA](https://huggingface.co/datasets/remyxai/vqasynth_spacellava)
- **Repository:** [VQASynth](https://github.com/remyxai/VQASynth/tree/main)
- **Paper:** [SpatialVLM](https://arxiv.org/abs/2401.12168)

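The training data is available on the Hub, so it can be inspected directly. Below is a minimal sketch using the `datasets` library; the split and column names are whatever the dataset defines, so print the dataset object rather than assuming them.

```python
from datasets import load_dataset

# Load the spatial-VQA dataset used for the fine-tune (downloads from the HF Hub).
ds = load_dataset("remyxai/vqasynth_spacellava")

# Inspect the splits and columns before relying on any particular field names.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```
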
## Usage

Try the `run_inference.py` script to run a quick test:
```bash
python run_inference.py --model_location remyxai/SpaceMinitron-4B \
    --image_source "https://remyx.ai/assets/spatialvlm/warehouse_rgb.jpg" \
    --user_prompt "What is the distance between the man in the red hat and the pallet of boxes?"
```

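Since the model is trained with the [prismatic-vlms](https://github.com/TRI-ML/prismatic-vlms) codebase, programmatic inference should follow the usual prismatic pattern. The sketch below is an assumption of what `run_inference.py` does rather than a copy of it: `prismatic.load` may need a local checkpoint path instead of the Hub ID, and the generation settings are illustrative.

```python
import requests
import torch
from PIL import Image
from prismatic import load

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the VLM (a local path to the downloaded checkpoint may be required instead of the Hub ID).
vlm = load("remyxai/SpaceMinitron-4B")
vlm.to(device, dtype=torch.bfloat16)

# Fetch the example image.
image_url = "https://remyx.ai/assets/spatialvlm/warehouse_rgb.jpg"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Build the chat-style prompt expected by the backbone.
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(
    role="human",
    message="What is the distance between the man in the red hat and the pallet of boxes?",
)
prompt_text = prompt_builder.get_prompt()

# Generate an answer.
answer = vlm.generate(image, prompt_text, do_sample=False, max_new_tokens=128)
print(answer)
```
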
## Deploy

Under the `docker` directory, you'll find a dockerized Triton Server for this model. Run the following:

```bash
docker build -f Dockerfile -t spaceminitron-4b-server:latest .
docker run -it --rm --gpus all -p 8000:8000 -p 8001:8001 -p 8002:8002 --shm-size 24G spaceminitron-4b-server:latest
python3 client.py --image_path "https://remyx.ai/assets/spatialvlm/warehouse_rgb.jpg" \
    --prompt "What is the distance between the man in the red hat and the pallet of boxes?"
```

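`client.py` handles the request formatting for you; if you want to call the Triton endpoint from your own code, a sketch with `tritonclient` follows. The model name and the tensor names/datatypes ("spaceminitron", "image", "prompt", "output") are placeholders, so check `client.py` and the model's `config.pbtxt` under the `docker` directory for the real ones.

```python
import numpy as np
import requests
import tritonclient.http as httpclient

# Connect to the Triton HTTP endpoint exposed by the container.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Pack the image bytes and the prompt as BYTES tensors (names and shapes are placeholders).
image_bytes = requests.get("https://remyx.ai/assets/spatialvlm/warehouse_rgb.jpg").content
image_in = httpclient.InferInput("image", [1], "BYTES")
image_in.set_data_from_numpy(np.array([image_bytes], dtype=object))

prompt = "What is the distance between the man in the red hat and the pallet of boxes?"
prompt_in = httpclient.InferInput("prompt", [1], "BYTES")
prompt_in.set_data_from_numpy(np.array([prompt.encode()], dtype=object))

# Run inference and read back the generated text.
result = client.infer(
    model_name="spaceminitron",
    inputs=[image_in, prompt_in],
    outputs=[httpclient.InferRequestedOutput("output")],
)
print(result.as_numpy("output"))
```
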
## Citation

```bibtex
@article{chen2024spatialvlm,
  title   = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author  = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year    = {2024},
  url     = {https://arxiv.org/abs/2401.12168},
}

@inproceedings{karamcheti2024prismatic,
  title     = {Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models},
  author    = {Siddharth Karamcheti and Suraj Nair and Ashwin Balakrishna and Percy Liang and Thomas Kollar and Dorsa Sadigh},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2024},
}
```