On a single GPU: No module named 'transformers_modules.moss-moon-003-sft-int4.custom_autotune'

#4
by xiabo0816 - opened
root@bogon ~/m/MOSS (main)# python3 
Python 3.9.16 (main, Mar  8 2023, 14:00:05)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("/root/moss/moss-moon-003-sft-int4", trust_remote_code=True)
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
>>> model = AutoModelForCausalLM.from_pretrained("/root/moss/moss-moon-003-sft-int4", trust_remote_code=True).half().cuda()
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 466, in from_pretrained
    return model_class.from_pretrained(
  File "/root/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2498, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/moss-moon-003-sft-int4/modeling_moss.py", line 608, in __init__
    self.quantize(config.wbits, config.groupsize)
  File "/root/.cache/huggingface/modules/transformers_modules/moss-moon-003-sft-int4/modeling_moss.py", line 732, in quantize
    from .quantization import quantize_with_gptq
  File "/root/.cache/huggingface/modules/transformers_modules/moss-moon-003-sft-int4/quantization.py", line 8, in <module>
    from .custom_autotune import *
ModuleNotFoundError: No module named 'transformers_modules.moss-moon-003-sft-int4.custom_autotune'
>>>

I already ran `git clone` on MOSS.git and `cd`'d into the repo, but I still get:

ModuleNotFoundError: No module named 'transformers_modules.moss-moon-003-sft-int4.custom_autotune'

Is some file still in the wrong place?
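One quick way to narrow this down is to check whether custom_autotune.py actually made it into the cached module directory that transformers builds under ~/.cache. A minimal sketch (the cache path below is taken from the traceback above; adjust it to your environment):

```python
import os

def missing_files(module_dir,
                  required=("custom_autotune.py", "quantization.py", "modeling_moss.py")):
    """Return the required files that are absent from a cached module directory."""
    return [name for name in required
            if not os.path.isfile(os.path.join(module_dir, name))]

# Path taken from the traceback above; adjust to your own cache location.
cache_dir = os.path.expanduser(
    "~/.cache/huggingface/modules/transformers_modules/moss-moon-003-sft-int4")
print(missing_files(cache_dir))
```

If `custom_autotune.py` is reported missing, copying it into that directory from the model checkout is the workaround discussed below in this thread.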

After adding one more trailing slash `/`:

model = AutoModelForCausalLM.from_pretrained("/root/moss/moss-moon-003-sft-int4/", trust_remote_code=True).half().cuda()

the error changed:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 466, in from_pretrained
    return model_class.from_pretrained(
  File "/root/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2498, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_moss.py", line 608, in __init__
    self.quantize(config.wbits, config.groupsize)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_moss.py", line 732, in quantize
    from .quantization import quantize_with_gptq
  File "/root/.cache/huggingface/modules/transformers_modules/quantization.py", line 8, in <module>
    from .custom_autotune import *
ModuleNotFoundError: No module named 'transformers_modules.custom_autotune'

Same error

I just copied custom_autotune.py directly into "/root/.cache/huggingface/modules/transformers_modules/"
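In script form, that copy workaround looks roughly like this. The paths are the ones from this thread (the file comes from the cloned model checkout, the destination is the transformers module cache), so treat them as assumptions about your layout:

```python
import os
import shutil

def ensure_custom_autotune(src_dir, cache_dir):
    """Copy custom_autotune.py from the model checkout into the transformers
    module cache if it is not already there. Returns the destination path."""
    src = os.path.join(src_dir, "custom_autotune.py")
    dst = os.path.join(cache_dir, "custom_autotune.py")
    if not os.path.isfile(dst):
        shutil.copy(src, dst)
    return dst

# Paths from this thread; adjust to your environment before running:
# ensure_custom_autotune(
#     "/root/moss/moss-moon-003-sft-int4",
#     "/root/.cache/huggingface/modules/transformers_modules")
```

Note that, as the replies below show, this did not resolve the error for everyone.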

Try this:

import sys
sys.path.append('/root/.cache/huggingface/modules')

I'm using a Tesla P40. Switching to fnlp/moss-moon-003-sft works fine, so maybe this GPU just doesn't play well with INT4?

Try this:

import sys
sys.path.append('/root/.cache/huggingface/modules')

I tried this here, but it still doesn't work 😢

I just copied custom_autotune.py directly into "/root/.cache/huggingface/modules/transformers_modules/"

I tried this here too, and it also doesn't work


https://github.com/yhyu13/MOSS/blob/dev/local_setup.sh
