

Model Card for SpaceLlama3.1

SpaceLlama3.1 uses Llama 3.1 8B as the LLM backbone, along with the fused DINOv2 + SigLIP visual features of prismatic-vlms.
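For quick inference, here is a minimal sketch assuming the prismatic-vlms load/generate interface; the image URL and question are placeholders, and passing the Hub id directly to `load` is an assumption (you may need to download the checkpoint locally first and pass its path):

```python
import requests
import torch
from PIL import Image
from prismatic import load

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the checkpoint; passing the Hub id directly is an assumption --
# a local download of the repo may be required instead.
vlm = load("remyxai/SpaceLlama3.1")
vlm.to(device, dtype=torch.bfloat16)

# Placeholder image URL and question -- substitute your own scene.
image_url = "https://example.com/scene.jpg"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
user_prompt = "How far is the chair from the table?"

# Build the chat prompt in the format the Llama 3.1 backbone expects.
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message=user_prompt)
prompt_text = prompt_builder.get_prompt()

# Generate an answer conditioned on the image.
answer = vlm.generate(
    image,
    prompt_text,
    do_sample=True,
    temperature=0.4,
    max_new_tokens=128,
)
print(answer)
```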

Model Details

SpaceLlama3.1 is a full fine-tune on the spacellava dataset, which was designed with VQASynth to enhance spatial reasoning, as in SpatialVLM.

Model Description

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM, enhancing the spatial reasoning of multimodal models. With a pipeline of expert models, we can infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.
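To make the synthesis step concrete, here is a minimal, self-contained sketch of the idea: objects localized in 3D by expert models (e.g., a metric depth estimator plus a segmenter) are turned into templated spatial QA pairs. The `DetectedObject`, `spatial_relation`, and `synthesize_vqa` names below are illustrative stand-ins, not the actual VQASynth API:

```python
from dataclasses import dataclass

# Hypothetical stand-in for an object localized by the expert-model
# pipeline (segmentation + depth lifted to metric 3D coordinates).
@dataclass
class DetectedObject:
    label: str
    center_xyz: tuple  # (x, y, z) position in camera space, meters

def spatial_relation(a: DetectedObject, b: DetectedObject) -> str:
    """Derive a coarse left/right relation from the 3D centers."""
    return "left of" if a.center_xyz[0] < b.center_xyz[0] else "right of"

def synthesize_vqa(objects: list[DetectedObject]) -> list[dict]:
    """Turn pairwise spatial relations into templated QA pairs."""
    qa_pairs = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            qa_pairs.append({
                "question": f"Is the {a.label} to the left or right of the {b.label}?",
                "answer": f"The {a.label} is {spatial_relation(a, b)} the {b.label}.",
            })
    return qa_pairs

# Example: two objects localized by the (hypothetical) expert models.
scene = [
    DetectedObject("chair", (0.4, 0.0, 2.1)),
    DetectedObject("table", (1.2, 0.0, 2.3)),
]
for qa in synthesize_vqa(scene):
    print(qa["question"], "->", qa["answer"])
```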

  • Developed by: remyx.ai
  • Model type: Multimodal, Vision-Language Model (prismatic-vlms with a Llama 3.1 backbone)
  • Model size: 8.33B parameters (BF16, safetensors)
  • Finetuned from model: Llama 3.1

Model Sources

  • Dataset synthesis pipeline: VQASynth (https://github.com/remyxai/VQASynth)
  • Training codebase: prismatic-vlms (https://github.com/TRI-ML/prismatic-vlms)

Citation

@article{chen2024spatialvlm,
  title = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year = {2024},
  url = {https://arxiv.org/abs/2401.12168},
}

@inproceedings{karamcheti2024prismatic,
  title = {Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models},
  author = {Karamcheti, Siddharth and Nair, Suraj and Balakrishna, Ashwin and Liang, Percy and Kollar, Thomas and Sadigh, Dorsa},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2024},
}