Sample code gives error 'KeyError: 'mistral''

#4
by SteveC - opened

```
KeyError                                  Traceback (most recent call last)
in <cell line: 8>()
      6 from transformers import pipeline
      7
----> 8 pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto")
      9
     10 # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating

2 frames
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
    708         return self._extra_content[key]
    709     if key not in self._mapping:
--> 710         raise KeyError(key)
    711     value = self._mapping[key]
    712     module_name = model_type_to_module_name(key)
```
I'm trying to load it through FastChat and I get the same `KeyError: 'mistral'`.

Basically, I see this for every Mistral-derived model!

This error means the installed version of transformers predates support for the `mistral` model type. Try updating to the latest versions of transformers and accelerate:

```
pip install --upgrade transformers accelerate
```

See also: https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/discussions/9#652a5cc6375b3a8bc158a6af
