The code for using the model is broken:
from transformers import pipeline
pipe = pipeline("text-generation", model="Lin-Chen/ShareGPT4V-7B")
config.json: 100%|███████████████████████████████████████████████████████████████████| 1.17k/1.17k [00:00<00:00, 11.6MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 751, in pipeline
config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1050, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 748, in __getitem__
raise KeyError(key)
KeyError: 'share4v'
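For context, the traceback shows that AutoConfig reads "model_type": "share4v" from the repo's config.json and then looks that string up in transformers' internal registry of known architectures; stock transformers has no entry for "share4v", so the lookup raises KeyError. The sketch below simulates that registry lookup with a simplified, hypothetical mapping (the real CONFIG_MAPPING in transformers is larger and maps to config classes, not strings):

```python
# Simplified, hypothetical stand-in for transformers' model_type registry.
CONFIG_MAPPING = {
    "llama": "LlamaConfig",
    "llava": "LlavaConfig",
}

def config_class_for(model_type: str) -> str:
    # AutoConfig raises KeyError for a model_type it does not know,
    # which is exactly what happens with "share4v" above.
    if model_type not in CONFIG_MAPPING:
        raise KeyError(model_type)
    return CONFIG_MAPPING[model_type]

try:
    config_class_for("share4v")
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 'share4v'
```

In other words, the failure is not a download or version problem: the pipeline cannot resolve the "share4v" architecture at all, so upgrading or downgrading transformers alone will not change the result unless a release actually registers that model type.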
I have the same problem. Did you solve it?
I have the same question. I have tried different versions of transformers, but it did not work.