Error

#1
by streamerbtw1002 - opened

'Get' error

Nice, might need a little bit more info to help you with that one.

[Screenshot attached: Screenshot_20240105_150839_Brave.jpg]

Oh, no idea. I don't usually post models that are small enough to even use that feature. Let me know if you figure it out though!

I get the same error on my Colab machine:

Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llmtuner/train/tuner.py", line 27, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/usr/local/lib/python3.10/dist-packages/llmtuner/train/sft/workflow.py", line 29, in run_sft
    model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args, training_args.do_train)
  File "/usr/local/lib/python3.10/dist-packages/llmtuner/model/loader.py", line 87, in load_model_and_tokenizer
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3374, in from_pretrained
    if metadata.get("format") == "pt":
AttributeError: 'NoneType' object has no attribute 'get'

Seems some metadata is missing?

Something related to the safetensors format, I guess. It seems the code doesn't recognize the PyTorch format: per the traceback, the file's safetensors metadata is None instead of {'format': 'pt'}.
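You can confirm that by reading the file's header with safe_open; a minimal check, assuming the shard is named model.safetensors:

from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    # a file saved without metadata prints None here,
    # which is exactly what from_pretrained then calls .get() on
    print(f.metadata())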

To fix it:

import safetensors
from safetensors.torch import save_file

# safetensors_path is the .safetensors file that is missing its metadata header
tensors = dict()
with safetensors.safe_open(safetensors_path, framework="pt") as f:
    for key in f.keys():
        tensors[key] = f.get_tensor(key)

# re-save the same tensors, this time tagging the file as PyTorch format
save_file(tensors, safetensors_path, metadata={'format': 'pt'})

cf: https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2#65752144412ee70185d49ff5
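Note that the snippet above loads every tensor into memory and overwrites the file in place, so keep a backup if the download was expensive. Writing to a new path works just as well (model-fixed.safetensors is just a placeholder name):

save_file(tensors, "model-fixed.safetensors", metadata={'format': 'pt'})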

Interesting find. I converted the pytorch bin to safetensors using a script, and while the resulting file can be used directly for inference in exllamav2 and can be quantized successfully, it failed to load when I tried to do further training (I had to use the pytorch_model.bin instead for my finetune).

EDIT: For reference, this was the script used for conversion:

import torch
import argparse, os
from safetensors.torch import save_file

parser = argparse.ArgumentParser(description="Convert .bin/.pt files to .safetensors")
parser.add_argument("input_files", nargs='+', type=str, help="Input file(s)")
args = parser.parse_args()

for file in args.input_files:
    print(f" -- Loading {file}...")
    state_dict = torch.load(file, map_location="cpu")

    out_file = os.path.splitext(file)[0] + ".safetensors"
    print(f" -- Saving {out_file}...")
    save_file(state_dict, out_file)  # note: no metadata argument here

As you can see, save_file is called without a metadata argument, so the resulting file is written with no metadata header at all.
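The fix is the same one-liner as in the snippet above: pass the metadata to save_file. Against the script's save call, that looks like:

# before: no metadata header is written
save_file(state_dict, out_file)

# after: tag the file so transformers' from_pretrained recognizes it
save_file(state_dict, out_file, metadata={'format': 'pt'})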

I've uploaded a new safetensors file converted using a fixed script that writes the metadata. Going to close this issue.

Doctor-Shotgun changed discussion status to closed
