OSError: MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

#5
by orby - opened

When I run this repo, I get the error above. Am I initializing things incorrectly?

    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF"
    GGUF_FILE = "Mistral-7B-Instruct-v0.3.Q6_K.gguf"

    # Set up my models
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME,
        gguf_file=GGUF_FILE,
        device_map='cuda'
    )
    tokenizer = AutoTokenizer.from_pretrained(
        MODEL_NAME,
        gguf_file=GGUF_FILE,
    )

Hi @orby
It's not the model: loading GGUF files is a pretty new feature in transformers: https://github.com/huggingface/transformers/issues/30889
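Since this error usually means the installed transformers predates GGUF support, a quick version check can rule that out before debugging further. This is a minimal sketch; the 4.41.0 threshold is an assumption based on the linked issue, so confirm it against the transformers release notes:

```python
import re
from importlib.metadata import PackageNotFoundError, version


def _parts(s: str) -> tuple:
    # Take the leading numeric pieces of a dotted version string,
    # stopping at pre-release suffixes like "rc1" or "dev0".
    nums = []
    for piece in s.split("."):
        m = re.match(r"\d+", piece)
        if not m:
            break
        nums.append(int(m.group()))
    return tuple(nums)


def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare two dotted numeric version strings."""
    return _parts(installed) >= _parts(minimum)


def gguf_ready(minimum: str = "4.41.0") -> bool:
    # Assumed minimum version for gguf_file support; verify in the
    # transformers changelog before relying on it.
    try:
        return meets_minimum(version("transformers"), minimum)
    except PackageNotFoundError:
        return False
```

If `gguf_ready()` returns False, upgrading transformers (and installing the `gguf` package it needs for this feature) should resolve the `OSError` above.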

I recommend using the "Use this model" button and loading it directly in your favorite local LM app.
