---
language:
- en
license: gemma
license_name: gemma-terms
license_link: https://ai.google.dev/gemma/terms
base_model: google/gemma-7b-it
tags:
- LLM
- MMFM
- Intel
model-index:
- name: llava-gemma-7b
  results:
  - task:
      type: Large Language Model
      name: Large Language Model
    metrics:
    - type: GQA
      name: GQA
      value: 0.472
    - type: MME Cog.
      name: MME Cog.
      value: 254
    - type: MME Per.
      name: MME Per.
      value: 895
    - type: MM-Vet
      name: MM-Vet
      value: 18.2
    - type: POPE Acc.
      name: POPE Acc.
      value: 0.848
    - type: POPE F1
      name: POPE F1
      value: 0.829
    - type: VQAv2
      name: VQAv2
      value: 68.7
    - type: MMVP
      name: MMVP
      value: 0.327
    - type: ScienceQA Image
      name: ScienceQA Image
      value: 0.625
library_name: transformers
pipeline_tag: image-text-to-text
---

## Model Details: LLaVA-Gemma-7b

`llava-gemma-7b` is a large multimodal model (LMM) trained using the [LLaVA-v1.5 framework](https://arxiv.org/abs/2310.03744), with the 7-billion-parameter [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) model as its language backbone and a CLIP-based vision encoder.

**_NOTE:_** As of 06/03/2024, we have not yet converted the weights of this model to the HuggingFace LLaVA format. This model card will be updated when we do.

| Model Details | Description |
| ----------- | ----------- |
| Authors | Intel: [Musashi Hinck](https://huggingface.co/musashihinck), [Matthew Olson](https://huggingface.co/matthewlyleolson), [David Cobbley](https://huggingface.co/djcobble), [Shao-Yen Tseng](https://huggingface.co/shaoyent), [Vasudev Lal](https://huggingface.co/vasudevlal) |
| Date | March 2024 |
| Version | 1 |
| Type | Large multimodal model (LMM) |
| Paper or Other Resources | [LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model](https://arxiv.org/abs/2404.01331) |
| License | [Gemma](https://ai.google.dev/gemma/terms) |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/llava-gemma-7b/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ) |

This model card was created by [Benjamin Consolvo](https://huggingface.co/bconsolvo) and the authors listed above.

## Intended Use

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | The model has been finetuned for multimodal benchmark evaluations, but it can also be used as a multimodal chatbot. |
| Primary intended users | Anyone using or evaluating multimodal models. |
| Out-of-scope uses | This model is not intended for uses that require high levels of factuality; high-stakes situations; mental health or medical applications; generating misinformation or disinformation; impersonating others; facilitating or inciting harassment or violence; or any use that could lead to the violation of a human right under the UN Declaration of Human Rights. |

### How to use

Currently, using `llava-gemma` requires a [modified preprocessor](./processing_llavagemma.py). _We are working on modifying the `LlavaProcessor` class to streamline usage (see [PR #30030](https://github.com/huggingface/transformers/pull/30030)). Expect updates soon._

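Since the usage example below imports `LlavaGemmaProcessor` from `processing_llavagemma.py`, that file needs to be present in your working directory. As a minimal sketch (not part of the original instructions), one way to fetch it from this repository is with `huggingface_hub`:

```python
import shutil
from huggingface_hub import hf_hub_download

# Download processing_llavagemma.py from the model repository and copy it
# next to your script so that `from processing_llavagemma import ...` resolves.
local_path = hf_hub_download(repo_id="Intel/llava-gemma-7b", filename="processing_llavagemma.py")
shutil.copy(local_path, "processing_llavagemma.py")
```
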
For current usage, see [`usage.py`](./usage.py) or the following code block:

```python
import requests
from PIL import Image
from transformers import (
    LlavaForConditionalGeneration,
    AutoTokenizer,
    CLIPImageProcessor
)
from processing_llavagemma import LlavaGemmaProcessor  # This is in this repo

checkpoint = "Intel/llava-gemma-7b"

# Load model
model = LlavaForConditionalGeneration.from_pretrained(checkpoint)
processor = LlavaGemmaProcessor(
    tokenizer=AutoTokenizer.from_pretrained(checkpoint),
    image_processor=CLIPImageProcessor.from_pretrained(checkpoint)
)

# Prepare inputs
# Use gemma chat template
prompt = processor.tokenizer.apply_chat_template(
    [{'role': 'user', 'content': "<image>\nWhat's the content of the image?"}],
    tokenize=False,
    add_generation_prompt=True
)
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate
generate_ids = model.generate(**inputs, max_length=30)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
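
The example above runs on CPU by default. If a CUDA GPU is available, the model can also be loaded in reduced precision and moved to the device; the following is a minimal sketch under that assumption, not taken from the original usage example:

```python
import torch

# Reload the model in bfloat16 and place it on the GPU (assumes a CUDA device).
model = LlavaForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to("cuda")

# Move inputs to the same device and match the image tensor dtype to the model.
inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")
inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

generate_ids = model.generate(**inputs, max_length=30)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```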

For straightforward use as a chatbot (without images), you can modify the last portion of the code as follows:

```python
# Prepare inputs
# Use gemma chat template
prompt = processor.tokenizer.apply_chat_template(
    [{'role': 'user', 'content': "Summarize the following paragraph: In this paper, we introduced LLaVA-Gemma, a compact vision-language model leveraging the Gemma Large Language Model in two variants, Gemma-2B and Gemma-7B. Our work provides a unique opportunity for researchers to explore the trade-offs between computational efficiency and multimodal understanding in small-scale models. The availability of both variants allows for a comparative analysis that sheds light on how model size impacts performance in various tasks. Our evaluations demonstrate the versatility and effectiveness of LLaVA-Gemma across a range of datasets, highlighting its potential as a benchmark for future research in small-scale vision-language models. With these models, future practitioners can optimize the performance of small-scale multimodal models more directly."}],
    tokenize=False,
    add_generation_prompt=True
)
# url = "https://www.ilankelman.org/stopsigns/australia.jpg"
# image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=None, return_tensors="pt")

# Generate
generate_ids = model.generate(**inputs, max_length=300)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
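
Note that in this text-only variant the prompt omits the `<image>` token and `images=None` is passed to the processor, so only text inputs are fed to the model.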

## Factors

| Factors | Description |
| ----------- | ----------- |
| Groups | - |
| Instrumentation | - |
| Environment | Trained for 4 hours on 8 Intel Gaudi 2 AI accelerators. |
| Card Prompts | Model training and deployment on alternate hardware and software will change model performance. |

## Metrics

| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | We evaluate the LLaVA-Gemma models on a collection of benchmarks similar to those used in other LMM works: GQA; MME; MM-Vet; POPE (accuracy and F1); VQAv2; MMVP; and the image subset of ScienceQA. Our experiments provide insights into the efficacy of various design choices within the LLaVA framework. |
| Decision thresholds | - |
| Approaches to uncertainty and variability | - |

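For reference, POPE poses binary yes/no questions about object presence, and its accuracy and F1 columns are the standard binary-classification metrics. The following is an illustrative sketch of how such scores could be computed, not the authors' evaluation code:

```python
def pope_scores(preds, labels):
    """Accuracy and F1 over binary "yes"/"no" answers, treating "yes" as the positive class."""
    tp = sum(p == "yes" and gold == "yes" for p, gold in zip(preds, labels))
    fp = sum(p == "yes" and gold == "no" for p, gold in zip(preds, labels))
    fn = sum(p == "no" and gold == "yes" for p, gold in zip(preds, labels))
    accuracy = sum(p == gold for p, gold in zip(preds, labels)) / len(labels)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "f1": f1}

print(pope_scores(["yes", "no", "yes"], ["yes", "yes", "no"]))  # {'accuracy': 0.333..., 'f1': 0.5}
```
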
## Training Data

The model was trained using the LLaVA-v1.5 data mixture, listed as follows:

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

## Quantitative Analyses

Performance of LLaVA-Gemma models across seven benchmarks. A highlighted cell indicates the strongest performance among the LLaVA-Gemma models. The bottom two rows show the self-reported performance of LLaVA Phi-2 and LLaVA-v1.5, respectively. The bolded **gemma-7b-it** row corresponds to the model described in this model card.

| LM Backbone | Vision Model | Pretrained Connector | GQA | MME cognition | MME perception | MM-Vet | POPE accuracy | POPE F1 | VQAv2 | ScienceQA Image | MMVP |
| ----------- | ------------ | -------------------- | ----- | ------------- | -------------- | ------ | ------------- | ------- | ----- | --------------- | ----- |
| gemma-2b-it | CLIP | Yes | 0.531 | 236 | 1130 | 17.7 | 0.850 | <mark>0.839</mark> | 70.65 | 0.564 | 0.287 |
| gemma-2b-it | CLIP | No | 0.481 | 248 | 935 | 13.1 | 0.784 | 0.762 | 61.74 | 0.549 | 0.180 |
| gemma-2b-it | DinoV2 | Yes | <mark>0.587</mark> | 307 | <mark>1133</mark> | <mark>19.1</mark> | <mark>0.853</mark> | 0.838 | <mark>71.37</mark> | 0.555 | 0.227 |
| gemma-2b-it | DinoV2 | No | 0.501 | <mark>309</mark> | 959 | 14.5 | 0.793 | 0.772 | 61.65 | 0.568 | 0.180 |
| | | | | | | | | | | | |
| **gemma-7b-it** | CLIP | Yes | 0.472 | 253 | 895 | 18.2 | 0.848 | 0.829 | 68.7 | 0.625 | <mark>0.327</mark> |
| gemma-7b-it | CLIP | No | 0.472 | 278 | 857 | 19.1 | 0.782 | 0.734 | 65.1 | <mark>0.636</mark> | 0.240 |
| gemma-7b-it | DinoV2 | Yes | 0.519 | 257 | 1021 | 14.3 | 0.794 | 0.762 | 65.2 | 0.628 | <mark>0.327</mark> |
| gemma-7b-it | DinoV2 | No | 0.459 | 226 | 771 | 12.2 | 0.693 | 0.567 | 57.4 | 0.598 | 0.267 |
| | | | | | | | | | | | |
| Phi-2b | CLIP | Yes | - | - | 1335 | 28.9 | - | 0.850 | 71.4 | 0.684 | - |
| Llama-2-7b | CLIP | Yes | 0.620 | 348 | 1511 | 30.6 | 0.850 | 0.859 | 78.5 | 0.704 | 46.1 |

## Ethical Considerations

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.

| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The model was trained using the LLaVA-v1.5 data mixture as described above. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | This model has not been assessed for harm or biases, and should not be used for sensitive applications where it may cause harm. |
| Use cases | - |

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Citation details

```bibtex
@misc{hinck2024llavagemma,
      title={LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model},
      author={Musashi Hinck and Matthew L. Olson and David Cobbley and Shao-Yen Tseng and Vasudev Lal},
      year={2024},
      eprint={2404.01331},
      url={https://arxiv.org/abs/2404.01331},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```