Model type 'mistral' is not supported.

#9
by Rishu9401 - opened

from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Yarn-Mistral-7B-128k-GGUF",
    model_file="yarn-mistral-7b-128k.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,
)

RuntimeError: Failed to create LLM 'mistral' from '/home/rishabh/.cache/huggingface/hub/models--TheBloke--Yarn-Mistral-7B-128k-GGUF/blobs/92ca72b3932c07376eba8d3c67f9890aaaf6f5714dd9f5e1af18193f110f7d93'.

Why are you using "TheBloke/Yarn-Mistral-7B-128k-GGUF" instead of "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"?

No specific reason. I'm just exploring the models and the kind of results they provide. Will using the Instruct model solve the issue?


Okay, I haven't tried this model; try posting your question there: https://huggingface.co/TheBloke/Yarn-Mistral-7B-128k-GGUF/discussions.

Unfortunately, you will have the same problem with the "Instruct" version of the model, since the error comes from your installed ctransformers build not recognizing the "mistral" model type, not from the model file itself.
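
A possible workaround, as a minimal sketch rather than a confirmed fix: Mistral checkpoints share the Llama architecture, so on ctransformers builds that predate the "mistral" model type, passing model_type="llama" will often load the same GGUF file. Upgrading ctransformers (pip install --upgrade ctransformers) to a release that registers the "mistral" type is the other route. The prompt below is just illustrative.

# Sketch, assuming an older ctransformers build without a "mistral" loader.
# Mistral uses the Llama architecture, so the "llama" model type can usually
# load the same GGUF file.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Yarn-Mistral-7B-128k-GGUF",
    model_file="yarn-mistral-7b-128k.Q4_K_M.gguf",
    model_type="llama",  # fall back to the llama loader
    gpu_layers=50,
)
print(llm("AI is going to"))  # the loaded model is directly callable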
