Error converting Qwen/Qwen1.5-MoE-A2.7B-Chat

#31
opened by Aryanne
Error: Error converting to fp16:

```
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert-hf-to-gguf.py", line 2443, in <module>
    main()
  File "/home/user/app/llama.cpp/convert-hf-to-gguf.py", line 2423, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/home/user/app/llama.cpp/convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'Qwen2MoeForCausalLM' not supported!
```
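For context: the converter dispatches on the `architectures` field of the model's `config.json` and raises this error when no converter class is registered for that string. Below is a minimal sketch of that registry-dispatch pattern; the names (`register`, `Qwen2Converter`, `from_model_architecture` as a free function) are simplified assumptions, not the actual llama.cpp code.

```python
# Minimal sketch of architecture-name dispatch, the pattern behind the
# error above. Names are simplified assumptions, not llama.cpp's code.
_registry: dict[str, type] = {}

def register(arch: str):
    """Map a Hugging Face architecture string to a converter class."""
    def wrap(cls: type) -> type:
        _registry[arch] = cls
        return cls
    return wrap

@register("Qwen2ForCausalLM")  # dense Qwen1.5 models have a registered class...
class Qwen2Converter:
    pass

def from_model_architecture(arch: str) -> type:
    try:
        return _registry[arch]
    except KeyError:
        # ...but 'Qwen2MoeForCausalLM' does not, so conversion stops here.
        raise NotImplementedError(f"Architecture {arch!r} not supported!") from None

from_model_architecture("Qwen2MoeForCausalLM")  # raises NotImplementedError
```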

Is it unsupported?
Qwen/Qwen1.5-MoE-A2.7B-Chat
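You can check which architecture string the converter will see by reading the repo's `config.json`; a quick check with `huggingface_hub`:

```python
import json
from huggingface_hub import hf_hub_download

# Download just config.json and print the architecture string that
# convert-hf-to-gguf.py dispatches on.
config_path = hf_hub_download("Qwen/Qwen1.5-MoE-A2.7B-Chat", "config.json")
with open(config_path) as f:
    print(json.load(f)["architectures"][0])  # prints: Qwen2MoeForCausalLM
```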

ggml.ai org replied, confirming that the Qwen2MoeForCausalLM architecture is not yet supported by the converter.

oh, okay thanks, will close this

Aryanne changed discussion status to closed
