How to load the model?

#7
by sadabshiper - opened

I created the token from the "Access Token" option. Still can't access the model from my local PC.

 
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

processor = LlavaNextProcessor.from_pretrained("llava-v1.6-vicuna-7b-hf", token="MyToken")

model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-v1.6-vicuna-7b-hf",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    token="MyToken",
)
model.to("cuda:0")

# Example image from the LLaVA repo
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)

# Single-turn conversation with one image placeholder
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda:0")

output = model.generate(**inputs, max_new_tokens=100)

print(processor.decode(output[0], skip_special_tokens=True))
```

Llava Hugging Face org

Can you give more details on what you mean by "Access Token"? These llava models do not require any authorization, as long as your PC is logged in to the HF Hub by running `huggingface-cli login` in the CLI.
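
For reference, logging in can be done from the terminal or from Python; a minimal sketch (the token string below is a placeholder, not a real credential):

```python
# Option 1: from the terminal (prompts for your token):
#   huggingface-cli login
#
# Option 2: programmatically via huggingface_hub:
from huggingface_hub import login

login(token="hf_xxx")  # placeholder; paste your own Access Token here
```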

Also, use the full path for the model: "llava-hf/llava-v1.6-vicuna-7b-hf" instead of "llava-v1.6-vicuna-7b-hf".
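
With that change, the load in the snippet above would look like this (a minimal sketch; no `token` argument should be needed for a public checkpoint):

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch

model_id = "llava-hf/llava-v1.6-vicuna-7b-hf"  # full repo id, including the org prefix

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda:0")
```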
