
# Llama3-Chat_Vector-kor_llava

I implemented a Korean LLaVA model using the chat vector method, referring to beomi's Korean Chat Vector LLaVA model and Toshi456's Japanese Chat Vector LLaVA model.

**Reference models:**

1. [beomi/Llama-3-KoEn-8B-xtuner-llava-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-xtuner-llava-preview)
2. [toshi456/chat-vector-llava-v1.5-7b-ja](https://huggingface.co/toshi456/chat-vector-llava-v1.5-7b-ja)
3. [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers)
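The chat vector approach used by the referenced models adds the weight delta between a chat-tuned model and its base model onto a model trained in another language. The sketch below is a minimal, generic illustration of that recipe, not this model card's exact merge script; the function name and the choice of skipped layers are assumptions (embedding and output layers are commonly skipped when vocabularies differ).

```python
import torch

def apply_chat_vector(base_sd, chat_sd, target_sd,
                      skip_keywords=("embed_tokens", "lm_head")):
    """Add (chat - base) weight deltas onto a target model's state dict.

    base_sd:   state dict of the original base model (e.g. an English Llama-3)
    chat_sd:   state dict of the chat/vision-tuned model built on that base
    target_sd: state dict of the model to receive the delta (e.g. a Korean Llama-3)
    """
    merged = {}
    for name, target_w in target_sd.items():
        if name in base_sd and name in chat_sd and not any(
            k in name for k in skip_keywords
        ):
            # chat vector = chat - base; shift the target by the same delta
            merged[name] = target_w + (chat_sd[name] - base_sd[name])
        else:
            # layers missing from either model, or vocabulary-dependent
            # layers, are carried over unchanged
            merged[name] = target_w.clone()
    return merged
```

In practice the three state dicts would come from `from_pretrained(...).state_dict()` calls on the reference models above, and the merged dict would be loaded back into the target architecture before saving.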

## Citation

```bibtex
@misc{Llama3-Chat_Vector-kor_llava,
    author    = {nebchi},
    title     = {Llama3-Chat_Vector-kor_llava},
    year      = {2024},
    publisher = {Hugging Face},
    url       = {https://huggingface.co/nebchi/Llama3-Chat_Vector-kor_llava}
}
```

*(Sample image: Seoul City)*

## Running the model on GPU

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration, TextStreamer

model_id = "nebchi/Llama3-Chat_Vector-kor_llava"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype='auto',
    device_map='auto',
    revision='a38aac3',
)

processor = AutoProcessor.from_pretrained(model_id)

tokenizer = processor.tokenizer
# Stop on either the EOS token or Llama-3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
streamer = TextStreamer(tokenizer)

# Llama-3 chat template: a user turn containing the <image> placeholder,
# then the opening of the assistant turn.
# (The Korean prompt means "Please describe this image.")
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n์ด ์ด๋ฏธ์ง€์— ๋Œ€ํ•ด์„œ ์„ค๋ช…ํ•ด์ฃผ์„ธ์š”.<|eot_id|>"
          "<|start_header_id|>assistant<|end_header_id|>\n\n์ด ์ด๋ฏธ์ง€์—๋Š”")
image_file = "https://search.pstatic.net/common/?src=http%3A%2F%2Fblogfiles.naver.net%2FMjAyNDAzMzFfNDYg%2FMDAxNzExODMzNTM4MTU5.Xf8te7rReNi4aXtFAsjjdeCsXDv1Tr4Be5pOsuofd0Mg.i8UclMMaD91i0MEMEXXKsgloQKZQbJfVJQeqK_2UC8Yg.PNG%2F359d2185%25A3%25ADc597%25A3%25AD49a3%25A3%25ADb102%25A3%25ADdf25158be59f.png&type=sc960_832"

raw_image = Image.open(requests.get(image_file, stream=True).raw)
# Move the inputs to the model's device and dtype (the weights are BF16).
inputs = processor(text=prompt, images=raw_image, return_tensors='pt').to(model.device, model.dtype)

output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    eos_token_id=terminators,
    no_repeat_ngram_size=3,
    temperature=0.7,
    top_p=0.9,
    streamer=streamer
)
# Skip the leading special tokens before decoding.
print(processor.decode(output[0][2:], skip_special_tokens=False))
```

### Results

> ์ด ์ด๋ฏธ์ง€์—๋Š” ๋„์‹œ์˜ ๋ชจ์Šต์ด ์ž˜ ๋ณด์—ฌ์ง‘๋‹ˆ๋‹ค. ๋„์‹œ ๋‚ด๋ถ€์—๋Š” ์—ฌ๋Ÿฌ ๊ฑด๋ฌผ๊ณผ ๊ฑด๋ฌผ๋“ค์ด ์žˆ๊ณ , ๋„์‹œ๋ฅผ ์—ฐ๊ฒฐํ•˜๋Š” ๋„๋กœ์™€ ๊ตํ†ต ์‹œ์Šคํ…œ์ด ์ž˜ ๋ฐœ๋‹ฌ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋„์‹œ์˜ ํŠน์ง•์€ ๋†’๊ณ  ๊ด‘๋ฒ”์œ„ํ•œ ๊ฑด๋ฌผ๋“ค๊ณผ ๊ตํ†ต๋ง์„ ๊ฐ–์ถ˜ ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

(Translation: "This image clearly shows a city. Within the city there are many buildings, and the roads and transportation systems connecting it are well developed. A notable feature of this city is its tall, extensive buildings and transportation network.")
Model size: 8.36B params · Tensor type: BF16 (Safetensors)