Oobabooga Error einops import rearrange ModuleNotFoundError: No module named 'einops' Press any key to continue . . .

Opened by Goldenblood56

Windows 10
RTX 4080 using fully updated Oobabooga at time of posting this.

Command-line arguments:
"call python server.py --auto-devices --chat --trust-remote-code --model OccamRazor_mpt-7b-storywriter-4bit-128g --wbits 4 --groupsize 128 --model_type llama"

Error

Starting the web UI...
INFO:Gradio HTTP request redirected to localhost :)
WARNING:trust_remote_code is enabled. This is dangerous.
INFO:Loading OccamRazor_mpt-7b-storywriter-4bit-128g...
INFO:Found the following quantized model: models\OccamRazor_mpt-7b-storywriter-4bit-128g\model.safetensors
Traceback (most recent call last):
File "C:\AI\oobabooga-windowsBest\text-generation-webui\server.py", line 872, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\AI\oobabooga-windowsBest\text-generation-webui\modules\models.py", line 159, in load_model
model = load_quantized(model_name)
File "C:\AI\oobabooga-windowsBest\text-generation-webui\modules\GPTQ_loader.py", line 179, in load_quantized
model = load_quant(str(path_to_model), str(pt_path), shared.args.wbits, shared.args.groupsize, kernel_switch_threshold=threshold)
File "C:\AI\oobabooga-windowsBest\text-generation-webui\modules\GPTQ_loader.py", line 45, in load_quant
model = AutoModelForCausalLM.from_config(config, trust_remote_code=shared.args.trust_remote_code)
File "C:\AI\oobabooga-windowsBest\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 407, in from_config
model_class = get_class_from_dynamic_module(config.name_or_path, module_file + ".py", class_name, **kwargs)
File "C:\AI\oobabooga-windowsBest\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 388, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "C:\AI\oobabooga-windowsBest\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 157, in get_class_in_module
module = importlib.import_module(module_path)
File "C:\AI\oobabooga-windowsBest\installer_files\env\lib\importlib_init
.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1050, in _gcd_import
File "", line 1027, in _find_and_load
File "", line 1006, in _find_and_load_unlocked
File "", line 688, in _load_unlocked
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\Users\Mainuser/.cache\huggingface\modules\transformers_modules\OccamRazor_mpt-7b-storywriter-4bit-128g\modeling_mpt.py", line 13, in
from .attention import attn_bias_shape, build_attn_bias
File "C:\Users\Mainuser/.cache\huggingface\modules\transformers_modules\OccamRazor_mpt-7b-storywriter-4bit-128g\attention.py", line 7, in
from einops import rearrange
ModuleNotFoundError: No module named 'einops'
Press any key to continue . . .

What I have tried

  1. pip install einops
    Here is what I got:
    C:\AI\oobabooga-windowsBest\text-generation-webui\models\OccamRazor_mpt-7b-storywriter-4bit-128g>pip install einops
    Requirement already satisfied: einops in c:\users\Mainuser\appdata\local\programs\python\python310\lib\site-packages (0.6.1)

It still does not work. Does anyone have any suggestions? Thanks.

Are you sure you pip-installed it in the correct environment? Running cmd_windows.bat or cmd_linux.sh will take you to the correct place.
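
For example, with the Windows one-click installer the sequence would look something like this (a sketch based on the paths in the logs above; your prompt may look different). Note that the pip output in the first post shows einops landing in the system Python under appdata\local\programs\python\python310, not in installer_files\env, which is why the webui still cannot see it:

C:\AI\oobabooga-windowsBest> cmd_windows.bat
C:\AI\oobabooga-windowsBest> pip install einops
C:\AI\oobabooga-windowsBest> python -c "import einops; print(einops.__file__)"

The printed path should point inside installer_files\env; if it still points at the system Python, the install went to the wrong interpreter.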

Same problem here,
Ubuntu 22.04.2 LTS

(textgen) user@host:$ pip install einops
Requirement already satisfied: einops in ./.conda/envs/textgen/lib/python3.10/site-packages (0.6.1)
(textgen) user@host:$ pip list | grep einops
einops 0.6.1
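
A quick sanity check in this situation (a generic sketch, not specific to any one setup) is to confirm that the python launching server.py is the same one pip installed into:

(textgen) user@host:$ which python pip
(textgen) user@host:$ python -c "import sys; print(sys.executable)"
(textgen) user@host:$ python -c "import einops; print(einops.__version__, einops.__file__)"

If the import fails here even though pip list shows the package, server.py is being launched with a different interpreter (for example a system python or another conda env).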

This model was made for KoboldAI (the 4-bit fork).

So you're basically saying that I need to download KoboldAI if I want to run this? And if I did want to get it working on Oobabooga, I guess it's up to someone else, like the developers of Oobabooga, to incorporate support somehow? If that is the case, I understand, and thank you.

On line 51 of your start_windows.bat, add the line:
call pip install einops
Then run it again. Once it has installed, you can remove the line for future launches.
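
The added line simply runs pip inside the same environment the launcher activates, so it is equivalent to doing the install from cmd_windows.bat. Schematically, the edited file looks like this (surrounding lines omitted since they vary between installer versions; only the marked line is new):

:: ...the launcher's environment activation above...
call pip install einops
:: ...the launcher's normal server startup below...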

@TheNitzel Thank you! That worked!

Thanks, that worked for me too, TheNitzel. But now I get this? lol

What I have tried so far: updating "ooba", and that's about it; no clue what else to do.
Windows 10
RTX 4080

Command-line arguments:
"call python server.py --auto-devices --chat --trust-remote-code --model OccamRazor_mpt-7b-storywriter-4bit-128g --wbits 4 --groupsize 128 --model_type llama"

Starting the web UI...
INFO:Gradio HTTP request redirected to localhost :)
WARNING:trust_remote_code is enabled. This is dangerous.
INFO:Loading OccamRazor_mpt-7b-storywriter-4bit-128g...
INFO:Found the following quantized model: models\OccamRazor_mpt-7b-storywriter-4bit-128g\model.safetensors
C:\Users\xxxxx/.cache\huggingface\modules\transformers_modules\OccamRazor_mpt-7b-storywriter-4bit-128g\attention.py:148: UserWarning: Using attn_impl: torch. If your model does not use alibi or prefix_lm we recommend using attn_impl: flash otherwise we recommend using attn_impl: triton.
warnings.warn('Using attn_impl: torch. If your model does not use alibi or ' + 'prefix_lm we recommend using attn_impl: flash otherwise ' + 'we recommend using attn_impl: triton.')
You are using config.init_device='cpu', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.

Any ideas? Thanks.
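
For what it's worth, the attn_impl message is a warning, not an error, so loading continues past it. MPT models read the attention implementation from the attn_config block of the model's config.json, so switching it would look roughly like this (a sketch assuming the file follows the upstream mosaicml/mpt-7b layout):

"attn_config": {
    "attn_impl": "triton",
    "alibi": true
}

mpt-7b-storywriter uses alibi for its long context, so by the warning's own logic torch or triton are the applicable choices, and triton additionally requires a working triton installation.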
