Error loading model

#1
by kopsahlong - opened

Hi there! I'm trying to download this model using the code provided under "Use in Transformers".

from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3")
model = AutoModelForCausalLM.from_pretrained("liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3")

I'm currently getting an error when I run this code:

OSError: liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3/main' for available files.

When I run only the model line, model = AutoModelForCausalLM.from_pretrained("liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3"), I get this error instead:

ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

I'm using transformers version 4.36.2.

Any thoughts on what might be going wrong here and how to fix it?

Thanks in advance for the help!

Update: I tried upgrading transformers to a specific commit (pip install git+https://github.com/huggingface/transformers.git@cae78c46), but I'm now getting the following error:

  File "/home/kopsahlong/miniconda3/envs/test2/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 926, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/kopsahlong/miniconda3/envs/test2/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 632, in __getitem__
    raise KeyError(key)
KeyError: 'llava'
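The KeyError comes from AutoConfig looking up config_dict["model_type"] in transformers' internal CONFIG_MAPPING registry: if the installed build has no entry for "llava", the lookup raises. A minimal stand-in (a plain dict in place of the real registry; the names here are illustrative) reproduces the same failure mode:

```python
# Stand-in for transformers' CONFIG_MAPPING: model_type -> config class name.
# A plain dict is used here; the real registry is a lazy mapping, but the
# lookup semantics are the same.
registry = {"llama": "LlamaConfig", "bert": "BertConfig"}

# What the repo's config.json declares as its model type.
config_dict = {"model_type": "llava"}

try:
    config_class = registry[config_dict["model_type"]]
except KeyError as err:
    print(f"KeyError: {err}")  # prints: KeyError: 'llava'
```

So the error means the installed transformers build simply doesn't know the "llava" model type, independent of whether the checkpoint itself is valid.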

This checkpoint is just the pretrained projector, not the full model.

If you want to load a model directly with the latest transformers, please use https://huggingface.co/llava-hf/llava-1.5-7b-hf or https://huggingface.co/llava-hf/llava-1.5-13b-hf
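Before pulling one of the llava-hf checkpoints, a quick sanity check (a sketch, assuming transformers is installed) is to see whether your build registers the llava model type at all, via the public CONFIG_MAPPING registry:

```python
# CONFIG_MAPPING is transformers' public model_type -> config class registry,
# the same mapping whose failed lookup produces KeyError: 'llava' above.
from transformers import CONFIG_MAPPING

if "llava" in CONFIG_MAPPING:
    print("this transformers build knows the llava model type")
else:
    print("llava not registered; upgrade transformers")
```

Note also that the llava-hf model cards load these checkpoints with LlavaForConditionalGeneration rather than AutoModelForCausalLM, which is why the earlier ValueError lists LlavaConfig as unsupported for that Auto class.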

Otherwise, use our codebase together with the checkpoints in https://huggingface.co/collections/liuhaotian/llava-15-653aac15d994e992e2677a7e

Thanks.
