How to load the som-llava model using the transformers library?
#1 by dyliu · opened
I attempted to load the model with the following code, but it didn't work:

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained("zzxslp/som-llava-v1.5-13b").to("cuda").eval()
processor = AutoProcessor.from_pretrained("zzxslp/som-llava-v1.5-13b")
```
Is it possible to load the som-llava model directly with the Transformers library, or is this checkpoint not compatible with that approach?
Check out the HF model here: https://huggingface.co/zzxslp/som-llava-v1.5-13b-hf
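For reference, here is a minimal loading-and-inference sketch against that `-hf` repo, assuming it follows the standard transformers LLaVA-1.5 layout; the half-precision setting, example image URL, and prompt are placeholders, not part of the original answer:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumption: the -hf repo ships transformers-compatible LLaVA weights.
model_id = "zzxslp/som-llava-v1.5-13b-hf"

# float16 is an assumption to fit a 13B checkpoint on a single GPU.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda").eval()
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question; swap in your own inputs.
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```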
zzxslp changed discussion status to closed