---
language: en
license: mit
tags:
- computer vision
- natural language processing
- vision language models
- multimodal models
pipeline_tag: image-to-text
---

# PRISM Models

All models were trained as part of the paper [Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models](https://arxiv.org/abs/2402.07865) by Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, and Dorsa Sadigh.
These models were trained in January 2024 using the open-source codebase [prismatic-vlms](https://github.com/TRI-ML/prismatic-vlms). The goal of releasing these models is to provide a thorough understanding of which design choices matter when training visually-conditioned language models, and to provide a number of strong open-source VLMs for the community to build on.

## Intended use

The primary use of PRISM models is research and development on visually-conditioned language models. The intended users are members of the machine learning and artificial intelligence research community.
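
As a minimal illustration of this research use, the sketch below loads a released checkpoint and runs single-image inference with the `prismatic` package. It follows the usage pattern documented in the prismatic-vlms codebase; the model ID, the `load` entry point, and the generation parameters are assumptions that should be checked against that repository, as exact names may differ between releases.

```python
import requests
import torch
from PIL import Image

from prismatic import load  # provided by the prismatic-vlms codebase

# NOTE: model ID and API follow the prismatic-vlms README; verify against the repo,
# as identifiers and signatures may differ between releases.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vlm = load("prism-dinosiglip+7b")  # downloads the checkpoint if not cached locally
vlm.to(device, dtype=torch.bfloat16)

# Fetch an example image and pose a question about it.
image_url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
user_prompt = "What is going on in this image?"

# Format the prompt with the model's own chat template.
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message=user_prompt)
prompt_text = prompt_builder.get_prompt()

# Generate a free-form answer conditioned on the image.
generated_text = vlm.generate(
    image,
    prompt_text,
    do_sample=True,
    temperature=0.4,
    max_new_tokens=512,
)
print(generated_text)
```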

## Licensing

PRISM models are released under an MIT License. Copyright (c) Toyota Research Institute, Inc. Toyota did not provide any of the materials used to train these models. They are provided for reference, verification, and evaluation of the training procedures described in the [paper](https://arxiv.org/abs/2402.07865) and implemented in the [code](https://github.com/TRI-ML/prismatic-vlms). See the paper and the README in the codebase for more details.

These models are provided as-is. Toyota Research Institute disclaims all warranties, express or implied, including any warranty of merchantability and fitness for a particular purpose.

## Training Procedures

All models are trained as described in the [paper](https://arxiv.org/abs/2402.07865) using the associated [training codebase](https://github.com/TRI-ML/prismatic-vlms). The following datasets are used for training:

- All LLaVA 1.5 Training Data
- LVIS-Instruct-4V
- LRV-Instruct

## Evaluation Procedures

Models are evaluated as described in the [paper](https://arxiv.org/abs/2402.07865) using the associated [evaluation codebase](https://github.com/TRI-ML/vlm-evaluation). Evaluation datasets span a number of visual reasoning tasks including:

- General visual question answering
- Bounding box prediction
- Challenge sets that evaluate counting, identification of spatial relationships, and propensity to hallucinate