Create README.md
README.md
ADDED
@@ -0,0 +1,43 @@
---
language: en
license: mit
tags:
- vision
- image-to-text
pipeline_tag: image-to-text
---

# BLIP-2, OPT-6.7b, fine-tuned on COCO

BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
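
As an illustration of the last two items, the question or the running conversation is passed to the model as a plain text prompt alongside the image. The "Question: ... Answer:" phrasing below follows the prompting convention from the BLIP-2 paper; the concrete strings are made up for illustration.

```python
# Hypothetical prompt strings; the image itself is supplied separately
# (see "How to use" below).

# Visual question answering: one question, left open for the model to complete.
vqa_prompt = "Question: how many dogs are in the picture? Answer:"

# Chat-like use: previous turns are concatenated into a single prompt that
# ends with the new, still unanswered question.
chat_prompt = (
    "Question: what is shown in the image? Answer: a dog on a beach. "
    "Question: what color is the dog? Answer:"
)
```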

## Intended uses & limitations

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/blip_2).
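
Below is a minimal sketch of how a BLIP-2 checkpoint like this one can be loaded with the generic `transformers` BLIP-2 API (`Blip2Processor` / `Blip2ForConditionalGeneration`). The repository id `Salesforce/blip2-opt-6.7b-coco`, the example image URL and the prompts are assumptions for illustration; see the documentation linked above for the authoritative examples.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Assumed repository id for this checkpoint; adjust if it differs.
checkpoint = "Salesforce/blip2-opt-6.7b-coco"

processor = Blip2Processor.from_pretrained(checkpoint)
# float16 keeps the 6.7B-parameter OPT decoder within a single-GPU memory budget;
# a CUDA device is assumed here (use float32 on CPU).
model = Blip2ForConditionalGeneration.from_pretrained(checkpoint, torch_dtype=torch.float16)
device = "cuda"
model.to(device)

# Any RGB image works; this is a commonly used COCO validation image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Image captioning: no text prompt, the model generates a caption from the image alone.
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())

# Visual question answering: a question is passed as the text prompt.
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```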