---
language:
- en
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
license: apache-2.0
---
# LLaVA Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png)

Below is the model card of the TinyLLaVA 1.1B model.

You can also check out the Google Colab demo to run LLaVA on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XtdA_UoyNzqiEYVR-iWA-xmit8Y2tKV2#scrollTo=DFVZgElEQk3x)

Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit)

## Model details

**Model type:**
TinyLLaVA is an open-source chatbot trained by fine-tuning TinyLlama on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Paper or resources for more information:**
https://llava-vl.github.io/

## How to use the model

First, make sure to have `transformers >= 4.35.3`.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` at the location where you want to query the image:

### Using `pipeline`:

Below we use the [`"YouLiXiya/tinyllava-v1.0-1.1b-hf"`](https://huggingface.co/YouLiXiya/tinyllava-v1.0-1.1b-hf) checkpoint.

```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "YouLiXiya/tinyllava-v1.0-1.1b-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"

image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
# >>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
```
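
By default the pipeline loads the checkpoint on CPU in full precision. If a GPU is available, a minimal sketch of creating the same pipeline in half precision on the first CUDA device (the `device` and `torch_dtype` arguments are generic `pipeline` keyword arguments, not part of the original snippet):

```python
import torch
from transformers import pipeline

model_id = "YouLiXiya/tinyllava-v1.0-1.1b-hf"

# Load the model weights in float16 and place them on the first CUDA device.
# Drop device/torch_dtype if you only have a CPU available.
pipe = pipeline(
    "image-to-text",
    model=model_id,
    device=0,
    torch_dtype=torch.float16,
)
```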

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "YouLiXiya/tinyllava-v1.0-1.1b-hf"

prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
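
Since the model supports multi-prompt generation, several prompts (each containing one `<image>` token) can also be batched through the processor. Below is a minimal sketch under that assumption, reusing the `model` and `processor` objects from the snippet above and the two image URLs and questions that already appear in this card; the batching itself is not part of the original examples:

```python
# Minimal sketch of batched multi-prompt generation, reusing model/processor from above.
prompts = [
    "USER: <image>\nWhat are these?\nASSISTANT:",
    "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:",
]
image_urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg",
]
raw_images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls]

# padding=True lets prompts of different lengths be batched together.
inputs = processor(prompts, raw_images, padding=True, return_tensors="pt").to(0, torch.float16)

outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
for text in processor.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```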

### Model optimization

#### 4-bit quantization through the `bitsandbytes` library

First, make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```
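
On more recent `transformers` releases, the same 4-bit loading can also be expressed through a `BitsAndBytesConfig` object passed as `quantization_config`. A minimal sketch, assuming the same `model_id` as above; the NF4 quantization type and float16 compute dtype are illustrative choices, not from the original card:

```python
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "YouLiXiya/tinyllava-v1.0-1.1b-hf"

# 4-bit NF4 quantization with float16 compute, handled by bitsandbytes.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    low_cpu_mem_usage=True,
    device_map="auto",  # let accelerate place the quantized weights on the GPU
)
```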

#### Use Flash-Attention 2 to further speed up generation

First, make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
```
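
Note that on newer `transformers` releases (4.36 and later), the `use_flash_attention_2` flag is superseded by the `attn_implementation` argument; a minimal sketch of the equivalent call:

```python
import torch
from transformers import LlavaForConditionalGeneration

model_id = "YouLiXiya/tinyllava-v1.0-1.1b-hf"

# Equivalent call on transformers >= 4.36, where attn_implementation replaces
# use_flash_attention_2. Requires flash-attn and a supported GPU.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",
).to(0)
```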

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.