Mediocreatmybest committed on
Commit 49e6b0d (1 parent: ba19ee4)

Update README.md

Files changed (1)
  1. README.md +62 -2
README.md CHANGED
@@ -1,5 +1,65 @@
  ---
- library_name: transformers
+ language: en
+ license: mit
+ tags:
+ - vision
+ - image-to-text
+ - image-captioning
+ - visual-question-answering
  pipeline_tag: image-to-text
+ inference: false
  ---
- 8-Bit saved version from: https://huggingface.co/Salesforce/blip2-opt-6.7b
+
+ Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
+ _8-bit / fp4 / float16 / Safetensors_
+ 🥱_- Mediocre_
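+
+ As an illustration (not part of the original card), a minimal loading sketch using `BitsAndBytesConfig`, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed and a CUDA device is available; the repo id below is a placeholder for this checkpoint:
+
+ ```python
+ import torch
+ from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig
+
+ model_id = "Mediocreatmybest/blip2-opt-6.7b"  # placeholder: substitute this repository's actual id
+
+ processor = Blip2Processor.from_pretrained(model_id)
+ model = Blip2ForConditionalGeneration.from_pretrained(
+     model_id,
+     quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights via bitsandbytes
+     device_map="auto",
+     torch_dtype=torch.float16,  # non-quantized modules kept in float16
+ )
+ ```
+
+ For the fp4 option mentioned above, `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="fp4")` can be passed instead.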
+
+
+ # BLIP-2, OPT-6.7b, pre-trained only
+
+ BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
+ It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
+
+ Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ## Model description
+
+ BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
+
+ The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
+ while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
+ which bridge the gap between the embedding space of the image encoder and that of the large language model.
+
+ The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
+
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
+ alt="drawing" width="600"/>
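+
+ A small sketch (not from the original card) that inspects these three components via the upstream config, assuming access to `Salesforce/blip2-opt-6.7b`:
+
+ ```python
+ from transformers import Blip2Config
+
+ config = Blip2Config.from_pretrained("Salesforce/blip2-opt-6.7b")
+
+ print(config.vision_config.hidden_size)   # CLIP-like ViT image encoder
+ print(config.qformer_config.hidden_size)  # BERT-like Q-Former
+ print(config.text_config.hidden_size)     # OPT-6.7b language model
+ print(config.num_query_tokens)            # learned query tokens bridging image and text spaces
+ ```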
+
+ This allows the model to be used for tasks like:
+
+ - image captioning
+ - visual question answering (VQA)
+ - chat-like conversations, by feeding the image and the previous conversation as a prompt to the model
+
+ ## Direct Use and Downstream Use
+
+ You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
+ fine-tuned versions on a task that interests you.
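+
+ As a brief illustration of conditional generation (a sketch, not the official example; it reuses the `model` and `processor` from the loading sketch above and assumes a single CUDA device):
+
+ ```python
+ import torch
+ import requests
+ from PIL import Image
+
+ # Image captioning: the image alone is the prompt.
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
+ generated_ids = model.generate(**inputs, max_new_tokens=20)
+ print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
+ ```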
+
+ ## Bias, Risks, Limitations, and Ethical Considerations
+
+ BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations mentioned in Meta's model card:
+
+ > Like other large language models for which the diversity (or lack thereof) of training
+ > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
+ > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
+ > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
+ > large language models.
+
+ BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating similarly inappropriate content or replicating inherent biases in the underlying data.
+
+ BLIP2 has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context it is being deployed in.
+
+ ### How to use
+
+ For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
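+
+ In addition to the linked documentation, a short prompted (VQA-style) sketch, again reusing `model`, `processor`, and `image` from the examples above; the "Question: ... Answer:" prompt format follows the BLIP-2 documentation:
+
+ ```python
+ prompt = "Question: how many cats are in the picture? Answer:"
+
+ inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
+ generated_ids = model.generate(**inputs, max_new_tokens=10)
+ print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
+ ```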