TypeError: mistral isn't supported yet.

#4 opened by Ayenem
```text
    model = AutoGPTQForCausalLM.from_quantized(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../auto_gptq/modeling/auto.py", line 87, in from_quantized
    model_type = check_and_get_model_type(model_name_or_path, trust_remote_code)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../auto_gptq/modeling/_utils.py", line 149, in check_and_get_model_type
    raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: mistral isn't supported yet.
```

I followed the instructions from https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ#how-to-use-this-gptq-model-from-python-code (I tried installing both from the wheel and from source).
Is Mistral not runnable with AutoGPTQ yet?

(same problem with https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ)

Yeah, it's not supported with AutoGPTQ. You've got to do it with ExLlama or Transformers itself.

Thank you for the reply.
Would you happen to know where I can find instructions for that?

Instructions are in the README: there's a Transformers Python example, plus instructions for using text-generation-webui (which supports ExLlama) and Text Generation Inference.
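
For reference, here's a minimal sketch of what the Transformers route looks like (not the README verbatim). It assumes transformers >= 4.34 (the first release with Mistral support) plus the optimum and auto-gptq packages installed so Transformers can load the GPTQ weights, and it skips the OpenOrca prompt template for brevity:

```python
# Minimal sketch: load a GPTQ-quantized Mistral model via Transformers
# instead of AutoGPTQForCausalLM. Assumes transformers >= 4.34 and the
# optimum + auto-gptq packages are installed; check the model README
# for the exact versions it recommends.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-OpenOrca-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on the available GPU(s)
    revision="main",    # pick a different branch for another quant config
)

prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The README's own example also wraps the prompt in the model's chat template before generating, so prefer that version for real use.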
