Help: running web_demo throws an exception
How can the problem below be solved? I tried adjusting the GPU memory usage of model = AutoModel.from_pretrained("chatglm-6b", trust_remote_code=True).half().cuda(), but the exception did not change.
I am using a Tencent Cloud GPU T4 server. The same chatglm-6b starts normally on my local laptop, but throws an exception on the cloud server. Installation steps:
1. git clone https://github.com/THUDM/ChatGLM-6B
2. Enter the downloaded ChatGLM-6B folder
3. Run pip install -r requirements.txt to install the dependencies
4. Install Gradio: pip install gradio
5. Download the model: in the ChatGLM-6B-main directory, run git clone https://huggingface.co/THUDM/chatglm-6b — on Tencent Cloud this finished within seconds, presumably served from a mirror.
6. In chatGLM6B/config.json, replace the model path THUDM/chatglm-6b with chatglm-6b, and replace every occurrence of THUDM/chatglm-6b in web_demo.py with chatglm-6b
7. Run python web_demo.py; the exception stack trace is as follows:
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "/home/ubuntu/ChatGLM-6B/web_demo.py", line 5, in <module>
    tokenizer = AutoTokenizer.from_pretrained("chatglm-6b", trust_remote_code=True)
  File "/home/ubuntu/anaconda3/envs/chatGLM/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/ubuntu/anaconda3/envs/chatGLM/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained
    return cls._from_pretrained(
  File "/home/ubuntu/anaconda3/envs/chatGLM/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 221, in __init__
    self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
  File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 64, in __init__
    self.text_tokenizer = TextTokenizer(vocab_file)
  File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 22, in __init__
    self.sp.Load(model_path)
  File "/home/ubuntu/anaconda3/envs/chatGLM/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/home/ubuntu/anaconda3/envs/chatGLM/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
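One thing worth checking, given that the model clone in step 5 finished within seconds: a git clone of the Hugging Face repo without git-lfs installed leaves small pointer stubs in place of the real weight files, and sentencepiece then fails to parse the stub that replaces ice_text.model with exactly this kind of ParseFromArray error. The helper below is a hypothetical diagnostic sketch (not part of ChatGLM-6B) for telling a pointer stub apart from a real model file:

```python
# Hedged sketch: detect whether a "downloaded" model file is actually a
# git-lfs pointer stub rather than the real binary payload.
# looks_like_lfs_pointer is a hypothetical helper, not part of ChatGLM-6B.
from pathlib import Path

# Every git-lfs pointer file starts with this line.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def looks_like_lfs_pointer(path: str) -> bool:
    """Return True if the file appears to be a tiny git-lfs pointer stub."""
    p = Path(path)
    # Real model files are many megabytes; pointer stubs are ~130 bytes.
    if p.stat().st_size > 1024:
        return False
    return p.read_bytes().startswith(LFS_MAGIC)

# Example (path is an assumption based on the steps above):
# looks_like_lfs_pointer("chatglm-6b/ice_text.model")
```

If the file does turn out to be a pointer stub, installing git-lfs and running `git lfs pull` inside the chatglm-6b directory should fetch the actual weights.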