---
license: apache-2.0
datasets:
  - TIGER-Lab/MMEB-train
language:
  - en
base_model:
  - llava-hf/llava-v1.6-mistral-7b-hf
library_name: transformers
---

This is a new checkpoint trained from llava-v1.6-mistral-7b-hf with an enhanced training setup (LoRA tuning, a batch size of 2048, and a maximum sub-dataset size of 100k). It shows significantly improved performance on MMEB and Flickr30K compared to the previous Phi-3.5-based model.
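For orientation, a LoRA fine-tuning setup of this kind can be expressed with the PEFT library roughly as below; the rank, scaling, dropout, and target modules are illustrative assumptions, not the exact recipe used for this checkpoint.

```python
# Illustrative LoRA configuration only; values are assumptions, not the training recipe.
import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

base = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.bfloat16)

lora = LoraConfig(
    r=16,                       # low-rank dimension (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,          # dropout on the LoRA branches (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
)

model = get_peft_model(base, lora)  # only the LoRA adapters receive gradients
```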

This repo contains the code and data for VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks. In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach transforms an existing, well-trained Vision-Language Model (VLM) into an embedding model. The core idea is to append an [EOS] token at the end of the input sequence, whose final hidden state serves as the representation of the combined multimodal input.
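As a rough illustration (not the VLM2Vec source code), this last-token pooling can be sketched as follows; the function assumes right-padded inputs and unit-normalized outputs, matching the `pooling='last'` and `normalize=True` settings used in the example further below.

```python
# Minimal sketch of last-token ([EOS]) pooling, independent of the VLM2Vec codebase.
# Assumes right padding: hidden_states is [batch, seq_len, dim], attention_mask is [batch, seq_len].
import torch
import torch.nn.functional as F

def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Index of the last non-padded token (the appended [EOS]) in each sequence.
    last_idx = attention_mask.sum(dim=1) - 1
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    reps = hidden_states[batch_idx, last_idx]      # [batch, dim]
    return F.normalize(reps, dim=-1)               # unit-norm embeddings
```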

## Github

- [VLM2Vec repository](https://github.com/TIGER-AI-Lab/VLM2Vec)

## Data

Our model is trained on MMEB-train with contrastive learning and evaluated on MMEB-eval. We use only in-batch negatives for training. Results on the 36 evaluation datasets are reported under Experimental Results below.
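As a point of reference, contrastive training with in-batch negatives amounts to a standard InfoNCE objective over the batch similarity matrix; the sketch below is generic (the temperature value is an assumption) and is not the exact training code.

```python
# Sketch of contrastive learning with in-batch negatives (generic InfoNCE).
# qry_reps and tgt_reps are L2-normalized [batch, dim] embeddings of matched query/target pairs.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(qry_reps: torch.Tensor, tgt_reps: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    logits = qry_reps @ tgt_reps.T / temperature          # [batch, batch] similarity matrix
    labels = torch.arange(qry_reps.size(0), device=qry_reps.device)
    return F.cross_entropy(logits, labels)                # diagonal entries are the positives
```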

### Train/Eval Data

- Train data: TIGER-Lab/MMEB-train
- Eval data: TIGER-Lab/MMEB-eval

## Experimental Results

VLM2Vec-LLaVa-Next outperforms the baselines and other versions of VLM2Vec by a large margin.


## How to use VLM2Vec-LlaVa-Next

First, clone our GitHub repository:

```
git clone https://github.com/TIGER-AI-Lab/VLM2Vec.git
```
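The example code also assumes its Python dependencies (torch, transformers, pillow, numpy) are installed; exact versions are not pinned in this card, so the command below is a minimal guess. If the repository ships a requirements file, prefer installing from that instead.

```
pip install torch transformers pillow numpy
```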

Then enter the repository directory and run the following code.

```python
from src.model import MMEBModel
from src.arguments import ModelArguments
from src.utils import load_processor

import torch
from transformers import HfArgumentParser, AutoProcessor
from PIL import Image
import numpy as np

model_args = ModelArguments(
    model_name='TIGER-Lab/VLM2Vec-LLaVa-Next',
    pooling='last',
    normalize=True,
    model_backbone='llava')

model = MMEBModel.load(model_args)
model.eval()
model = model.to('cuda', dtype=torch.bfloat16)

processor = load_processor(model_args)
```

### Image + Text -> Text

```python
inputs = processor('<|image_1|> Represent the given image with the following question: What is in the image',
                   [Image.open('figures/example.jpg')])
inputs = {key: value.to('cuda') for key, value in inputs.items()}
qry_output = model(qry=inputs)["qry_reps"]

string = 'A cat and a dog'
inputs = processor(string)
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## A cat and a dog = tensor([[0.2969]], device='cuda:0', dtype=torch.bfloat16)

string = 'A cat and a tiger'
inputs = processor(string)
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## A cat and a tiger = tensor([[0.2080]], device='cuda:0', dtype=torch.bfloat16)
```
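Building on the snippet above, several candidate captions can be ranked against the same query by scoring each one in turn. This is a small usage sketch (the extra candidate string is illustrative) rather than part of the official example.

```python
# Rank several candidate captions against the image+text query from the snippet above.
# Reuses `model`, `processor`, and `qry_output`.
candidates = ['A cat and a dog', 'A cat and a tiger', 'Two bicycles on a street']
scores = {}
for cand in candidates:
    inputs = processor(cand)
    inputs = {key: value.to('cuda') for key, value in inputs.items()}
    tgt_output = model(tgt=inputs)["tgt_reps"]
    scores[cand] = model.compute_similarity(qry_output, tgt_output).item()
print(max(scores, key=scores.get))  # highest-scoring caption for the query image
```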

### Text -> Image

```python
inputs = processor('Find me an everyday image that matches the given caption: A cat and a dog.')
inputs = {key: value.to('cuda') for key, value in inputs.items()}
qry_output = model(qry=inputs)["qry_reps"]

string = '<|image_1|> Represent the given image.'
inputs = processor(string, [Image.open('figures/example.jpg')])
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## <|image_1|> Represent the given image. = tensor([[0.3105]], device='cuda:0', dtype=torch.bfloat16)

inputs = processor('Find me an everyday image that matches the given caption: A cat and a tiger.')
inputs = {key: value.to('cuda') for key, value in inputs.items()}
qry_output = model(qry=inputs)["qry_reps"]

string = '<|image_1|> Represent the given image.'
inputs = processor(string, [Image.open('figures/example.jpg')])
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## <|image_1|> Represent the given image. = tensor([[0.2158]], device='cuda:0', dtype=torch.bfloat16)
```
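Symmetrically, one caption query can be scored against several candidate images. In this sketch the second image path is a hypothetical placeholder; only `figures/example.jpg` appears in the examples above.

```python
# Score one caption query against several candidate images (second path is a placeholder).
image_paths = ['figures/example.jpg', 'figures/another_example.jpg']

inputs = processor('Find me an everyday image that matches the given caption: A cat and a dog.')
inputs = {key: value.to('cuda') for key, value in inputs.items()}
qry_output = model(qry=inputs)["qry_reps"]

for path in image_paths:
    inputs = processor('<|image_1|> Represent the given image.', [Image.open(path)])
    inputs = {key: value.to('cuda') for key, value in inputs.items()}
    tgt_output = model(tgt=inputs)["tgt_reps"]
    print(path, model.compute_similarity(qry_output, tgt_output).item())
```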


## Citation

```bibtex
@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}
```