Commit 235cd53 (parent: 72c3467) by Mediocreatmybest

Update README.md

Files changed (1): README.md (+165 −2)
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---

Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
_8-bit / fp4 / float16 / Safetensors_
🥱_- Mediocre_

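The how-to examples further down cover full precision, `float16`, and 8-bit loading; the fp4 option mentioned above can be requested through `BitsAndBytesConfig`. Below is a minimal sketch, assuming the bitsandbytes integration in recent `transformers` and reusing the upstream checkpoint id from the examples in this card:

```python
# pip install accelerate bitsandbytes
# Minimal sketch of quantized loading (load one variant or the other, not both at once).
# The checkpoint id and quantization settings are illustrative assumptions, not
# instructions from this card.
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

# 8-bit loading
model_8bit = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# 4-bit loading with the fp4 quantization type
model_fp4 = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="fp4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
```

Either variant is then used exactly like the full-precision examples below: same processor, same `generate` call.
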
# BLIP-2, OPT-2.7b, pre-trained only

BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and that of the large language model.

The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>

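As a rough way to see this three-part structure in code, the checkpoint's configuration exposes one sub-configuration per component plus the number of learned query tokens (a minimal sketch, assuming the standard `transformers` BLIP-2 classes; the printed values depend on the checkpoint):

```python
# Minimal sketch: inspecting the three BLIP-2 components via the config.
from transformers import Blip2Config

config = Blip2Config.from_pretrained("Salesforce/blip2-opt-2.7b")

print(type(config.vision_config).__name__)   # vision (CLIP-like) image encoder config
print(type(config.qformer_config).__name__)  # Q-Former config
print(type(config.text_config).__name__)     # language model (OPT) config
print(config.num_query_tokens)               # number of learned query tokens
```
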
This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations, by feeding the image and the previous conversation as prompt to the model (captioning and chat-style prompting are sketched right below)

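As an illustration of the first and last items: leaving the text prompt empty produces a plain caption, while passing a question or a running dialogue as the prompt steers the generation. A minimal sketch, reusing the upstream checkpoint and demo image from the examples further down (the dialogue text is made up for illustration):

```python
# Minimal sketch: unprompted captioning vs. prompted, chat-style generation.
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# 1) Image captioning: no text prompt, the model describes the image.
inputs = processor(raw_image, return_tensors="pt")
print(processor.decode(model.generate(**inputs)[0], skip_special_tokens=True))

# 2) Chat-like prompting: feed the previous turn(s) plus a new question as the prompt.
prompt = "Question: how many dogs are in the picture? Answer: one. Question: what color is it? Answer:"
inputs = processor(raw_image, prompt, return_tensors="pt")
print(processor.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```
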
## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In 8-bit precision (`int8`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>