RaushanTurganbay committed
Commit bcef42b (1 parent: 66d62be)

added colab

Files changed (1): README.md +5 -0
README.md CHANGED
@@ -16,6 +16,8 @@ arxiv: 2408.03326
 
 ![image/png](llava_onevision_arch.png)
 
+Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-4AtYjR8UMtCALV0AswU1kiNkWCLTALT?usp=sharing)
+
 Below is the model card of 0.5B LLaVA-Onevision model which is copied from the original LLaVA-Onevision model card that you can find [here](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-si).
 
 
@@ -56,8 +58,11 @@ Below we used [`"llava-hf/llava-onevision-qwen2-0.5b-ov-hf"`](https://huggingfac
 from transformers import pipeline
 from PIL import Image
 import requests
+from transformers import AutoProcessor
+
 
 model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
+processor = AutoProcessor.from_pretrained(model_id)
 pipe = pipeline("image-to-text", model=model_id)
 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
 image = Image.open(requests.get(url, stream=True).raw)
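The snippet shown in the second hunk ends before the pipeline is actually invoked, and the commit adds `AutoProcessor` presumably so the model's chat template can build the prompt (the pipeline does not apply it on its own). Below is a minimal end-to-end sketch of how the pieces fit together; the question text and generation settings are illustrative assumptions, not part of this commit:

```python
from transformers import pipeline, AutoProcessor
from PIL import Image
import requests

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"

# The processor's chat template formats the prompt string the model expects.
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline("image-to-text", model=model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Hypothetical question; the original model card's prompt may differ.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What does this diagram illustrate?"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Pass the templated prompt to the pipeline along with the image.
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs[0]["generated_text"])
```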