I'm getting an error

#4
by ytcheng - opened

Could not load model shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit with any of the following classes: (<class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'>,). See the original errors:

while loading with LlamaForCausalLM, an error is thrown:
Traceback (most recent call last):
  File "/src/transformers/src/transformers/pipelines/base.py", line 279, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/transformers/src/transformers/modeling_utils.py", line 3236, in from_pretrained
    raise EnvironmentError(
OSError: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
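For context, a minimal sketch of why this OSError occurs: `from_pretrained` looks for one of the standard framework weight files in the repo, and a GGUF-only repo contains none of them. The function and file names below are illustrative assumptions, not the actual transformers implementation.

```python
# Simplified illustration (NOT the real transformers code): from_pretrained
# searches the repo for a standard weight file; a GGUF-only repo has none,
# so the lookup fails and transformers raises the OSError shown above.
EXPECTED_WEIGHT_FILES = [
    "pytorch_model.bin",   # PyTorch
    "tf_model.h5",         # TensorFlow
    "model.ckpt",          # TF checkpoint
    "flax_model.msgpack",  # Flax
]

def find_loadable_weights(repo_files):
    """Return the first standard weight file present, or None if the repo
    only contains formats (like .gguf) that this loader does not handle."""
    return next((f for f in repo_files if f in EXPECTED_WEIGHT_FILES), None)

# A GGUF repo typically exposes only quantized .gguf files (file name is hypothetical):
gguf_repo = ["README.md", "Llama3-8B-Chinese-Chat-q8_0.gguf"]
print(find_loadable_weights(gguf_repo))  # → None, hence the OSError
```

This is why the model works with GGUF-aware runtimes (e.g. llama.cpp) but not with the stock `LlamaForCausalLM` loading path used by the hosted pipeline.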

Sorry for the previous misleading usage section.

We have updated our usage section. Could you please try again following the latest usage instructions?

https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit#2-usage

I mean the error occurs in the Inference API widget on Hugging Face.
[screenshot attached: image.png]

We have not deployed that at the moment.

If you'd like to try our model online, you can use the following link:
https://huggingface.co/spaces/llamafactory/Llama3-8B-Chinese-Chat
