Errors while using this with Oobabooga

#3
by Ameenroayan - opened

Hey!

At first it gave me errors saying it couldn't find certain checkpoints, so I cloned everything into the folder, but then I started getting errors beyond my depth:


Traceback (most recent call last):
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 1112, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage)._typed_storage()._untyped_storage
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 238551040 bytes.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    if f.read(7) == "version":
  File "E:\AI\oobabooga-windows\installer_files\env\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1850: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI\oobabooga-windows\text-generation-webui\server.py", line 96, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\AI\oobabooga-windows\text-generation-webui\modules\models.py", line 186, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 3109, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "E:\AI\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 458, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'models\digitous_Alpacino30b\pytorch_model-00032-of-00033.bin' at 'models\digitous_Alpacino30b\pytorch_model-00032-of-00033.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

I think you're running out of memory to load the model. It takes about 24 GB of memory to load, which is probably more than you have. You could always try the 4bit.safetensors version instead.
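
For reference, a minimal sketch of loading the full-weight shards with less peak RAM, outside the webui. It assumes the transformers and accelerate packages bundled with the installer; the model path is taken from the traceback above.

import torch
from transformers import AutoModelForCausalLM

# Stream the shards one at a time in fp16 and let accelerate place layers
# across GPU/CPU/disk instead of materializing everything in system RAM.
model = AutoModelForCausalLM.from_pretrained(
    "models/digitous_Alpacino30b",   # path from the traceback above
    torch_dtype=torch.float16,       # half-precision: half the memory of fp32
    low_cpu_mem_usage=True,          # load shard-by-shard instead of all at once
    device_map="auto",               # requires accelerate; splits across devices
    offload_folder="offload",        # spill remaining layers to disk if needed
)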

I'm getting a very similar error on a 3090 Ti

Loading digitous_Alpacino30b...
Loading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\torch\serialization.py", line 791, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\torch\serialization.py", line 271, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\torch\serialization.py", line 252, in init
super().init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models\digitous_Alpacino30b\pytorch_model-00001-of-00033.bin'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\tools\OogaBooga\text-generation-webui\server.py", line 914, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\tools\OogaBooga\text-generation-webui\modules\models.py", line 84, in load_model
    model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=trust_remote_code)
  File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 3109, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "C:\tools\OogaBooga\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 445, in load_state_dict
    with open(checkpoint_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'models\digitous_Alpacino30b\pytorch_model-00001-of-00033.bin'
Done!
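
If anyone wants to check whether the download itself is incomplete, here is a hedged sketch that compares the shards listed in the index against what is actually on disk; paths assume the default webui layout from the traceback.

import json
from pathlib import Path

model_dir = Path("models/digitous_Alpacino30b")
# The index maps every tensor name to the shard file that contains it.
index = json.loads((model_dir / "pytorch_model.bin.index.json").read_text())
expected = sorted(set(index["weight_map"].values()))
missing = [name for name in expected if not (model_dir / name).exists()]
print(f"{len(missing)} of {len(expected)} shards missing:")
for name in missing:
    print(" ", name)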

In Oobabooga, cut and paste the pytorch_model-*.bin shards and the pytorch_model.bin.index.json out of the model folder to somewhere else; it seems the webui is trying to load the model's full weights and ignoring the quantized model, which fits fine in 24 GB of VRAM.
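
If it helps, a small sketch of that move, assuming the shard naming from the tracebacks above; the backup folder name is just my own choice.

from pathlib import Path

model_dir = Path("models/digitous_Alpacino30b")
backup = model_dir.parent / "digitous_Alpacino30b_full_weights"  # outside the model folder
backup.mkdir(exist_ok=True)

# Move the full-precision shards and their index out of the way so the
# webui falls back to the quantized 4bit.safetensors file.
to_move = list(model_dir.glob("pytorch_model-*.bin")) + [model_dir / "pytorch_model.bin.index.json"]
for f in to_move:
    if f.exists():
        f.rename(backup / f.name)
        print("moved", f.name)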

I have 64 GB but I have the same problem, and I don't understand what exactly I should cut and where...
