Tags: Transformers · Safetensors · English · Chinese · llava · pretraining · vision-language · llm · lmm · Inference Endpoints
bczhou committed fd01149 (1 parent: 064e744)

create model card

Files changed (1):
  1. README.md +55 -1
README.md CHANGED
@@ -7,4 +7,58 @@
language:
- en
- zh
library_name: transformers
---

**Model type:**
TinyLLaVA, a tiny 1.4B-parameter model trained using the exact recipe of [LLaVA](https://github.com/haotian-liu/LLaVA).
We trained TinyLLaVA with [TinyLlama](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) as the LLM backbone and [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) as the vision backbone.
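
If you want to double-check which backbones ended up in the converted checkpoint, one option is to inspect its configuration. This is a minimal sketch that assumes the standard `LlavaConfig` layout with nested `text_config` and `vision_config` entries:

```python
from transformers import AutoConfig

# Load only the configuration of the converted checkpoint (no weights are downloaded).
config = AutoConfig.from_pretrained("bczhou/tiny-llava-v1-hf")

# LlavaConfig exposes the two backbones as nested sub-configs.
print(config.text_config.model_type)    # expected: "llama" (the TinyLlama backbone)
print(config.vision_config.model_type)  # expected: "clip_vision_model" (the CLIP ViT-L/14-336 backbone)
```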

**Model use:**
The weights have been converted to the Hugging Face `transformers` format.

## How to use the model

First, make sure you have `transformers >= 4.35.3` installed.
The model supports multi-image and multi-prompt generation, meaning you can pass multiple images in your prompt. Also make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and to add the token `<image>` at the location where you want to query an image:
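
Before running the examples below, a quick sanity check on the version requirement can save some confusion. A minimal sketch (the `packaging` helper ships as a `transformers` dependency):

```python
import transformers
from packaging import version

# Fail early if the installed transformers release is older than the required 4.35.3.
assert version.parse(transformers.__version__) >= version.parse("4.35.3"), transformers.__version__
```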

### Using `pipeline`:

Below we use the [`"bczhou/tiny-llava-v1-hf"`](https://huggingface.co/bczhou/tiny-llava-v1-hf) checkpoint.

```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "bczhou/tiny-llava-v1-hf"
pipe = pipeline("image-to-text", model=model_id)

# Example image from the AI2D dataset hosted in the Hugging Face documentation assets.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}
```
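
If a GPU is available, the same pipeline can also be run there. This is a sketch using the generic `pipeline` arguments `device` and `torch_dtype` (not specific to this checkpoint):

```python
import torch
from transformers import pipeline

# Variant of the example above: place the pipeline on GPU 0 in half precision.
pipe = pipeline(
    "image-to-text",
    model="bczhou/tiny-llava-v1-hf",
    device=0,                   # CUDA device index; omit to stay on CPU
    torch_dtype=torch.float16,  # half precision to reduce memory use
)
```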

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "bczhou/tiny-llava-v1-hf"

prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Load the model in half precision and move it to GPU 0.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
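
As noted above, the model supports multi-image and multi-prompt generation. The following is a minimal batched sketch under the same assumptions as the example above (one `<image>` token per prompt, prompts padded together; it assumes the processor's tokenizer has a padding token, as in other converted LLaVA checkpoints):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "bczhou/tiny-llava-v1-hf"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)

# Two prompts, each following the USER/ASSISTANT template with one <image> token.
prompts = [
    "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:",
    "USER: <image>\nWhat are these?\nASSISTANT:",
]
urls = [
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg",
    "http://images.cocodataset.org/val2017/000000039769.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

# Pad the prompts to a common length so they can be batched together.
inputs = processor(prompts, images=images, padding=True, return_tensors="pt").to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
for text in processor.batch_decode(output, skip_special_tokens=True):
    print(text)
```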