A new checkpoint trained using [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) with an enhanced training setup (LoRA tuning, batch size of 2048, maximum sub-dataset size of 100k). This model has shown significantly improved performance on MMEB & Flickr30K compared to the previous Phi-3.5-based model.

This repo contains the code and data for [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160). In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach is based on transforming an existing, well-trained Vision-Language Model (VLM) into an embedding model.
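As a rough illustration, the sketch below shows one way to use the LLaVA-Next backbone as an embedding model through the Hugging Face `transformers` API. The adapter id is a placeholder, and the prompt template and last-token pooling are assumptions made for this sketch; refer to the GitHub repository linked below for the official training and inference code.

```python
# Minimal sketch (not the official VLM2Vec pipeline): load the
# llava-v1.6-mistral-7b-hf backbone, optionally apply a LoRA adapter,
# and pool the final token's last hidden state as the embedding.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

base_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(base_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Optionally load this checkpoint's LoRA weights (adapter id is a placeholder):
# from peft import PeftModel
# model = PeftModel.from_pretrained(model, "<this-checkpoint-repo-id>")

image = Image.open("example.jpg")  # any local image
prompt = "[INST] <image>\nRepresent the given image for retrieval. [/INST]"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

# Assumed pooling: hidden state of the last token, L2-normalized.
embedding = outputs.hidden_states[-1][:, -1, :]
embedding = torch.nn.functional.normalize(embedding, dim=-1)
print(embedding.shape)
```

For the exact prompt templates, task instructions, and evaluation scripts, see the GitHub repository below.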
## Github
- [Github](https://github.com/TIGER-AI-Lab/VLM2Vec)