KeyError: 'mixtral'

#9 by meetrais

I tried to run the code below, but it returns a "KeyError: 'mixtral'" error. Would appreciate your help.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# This is the line that raises KeyError: 'mixtral' on older transformers,
# since AutoConfig does not recognize the architecture yet
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

I am running into the same error. Can anyone help here?

Please make sure to run this with the latest transformers version; Mixtral support was only added in transformers v4.36.0, so older releases don't have the 'mixtral' key in their config mapping.

That solves my issue, thanks!

"transformers_version": "4.36.0.dev0",

Please make sure to run this with the latest transformers version

BTW: pip install -U transformers
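
If you want to sanity-check the install after upgrading, something like this should do it (a minimal sketch; the CONFIG_MAPPING import path below is the one used in recent transformers versions, and the 'mixtral' key is what the failing lookup was missing):

import transformers
print(transformers.__version__)  # needs to be >= 4.36.0 for Mixtral

# The original KeyError comes from this mapping lookup: on older
# versions, 'mixtral' is simply not registered.
from transformers.models.auto.configuration_auto import CONFIG_MAPPING
print("mixtral" in CONFIG_MAPPING)  # True on a new-enough install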

I am using transformers==4.37.2, which is (at the time of writing) the latest stable version, and I still get this KeyError.

If you're using Colab, make sure to restart your environment after upgrading. It should work afterwards.
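
For context: a plain pip install -U doesn't replace the old transformers module already imported into the running Python process, which is why the restart matters. If you'd rather restart programmatically than through the menu, one common trick is:

import os

# Kill the current kernel process; Colab restarts it automatically,
# so the next cell runs against the freshly installed transformers.
os.kill(os.getpid(), 9)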

It was a mix-up between my local and remote containers. Updating did in fact fix the issue as advertised. πŸŽ‰

Care to explain that? I'm experiencing the same issue. What containers did you do what with, and where?

Sure @AimeeAsimov, I am using a custom vLLM server in a Docker container. I originally built it with vLLM 0.2.0 and an old version of transformers. All I needed to do to solve the issue was rebuild the container with vLLM 0.3.0 and the latest version of transformers.

My mistake was that I was using an old version of the container image in the cloud. Once I noticed, I pushed the new version of my image, updated the cloud deployment, and everything has been working smoothly since.
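
In case it helps anyone with a similar setup: a cheap guard at server startup makes a stale image fail loudly instead of dying later with a confusing KeyError. This is just a sketch; the 4.36 threshold is the release where Mixtral support landed in transformers:

import transformers

# Mixtral support landed in transformers 4.36.0; fail fast on older builds.
major, minor = (int(x) for x in transformers.__version__.split(".")[:2])
assert (major, minor) >= (4, 36), (
    f"transformers {transformers.__version__} is too old for Mixtral (needs >= 4.36)"
)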
