Error loading the LoRA adapters using PEFT

#1
by carlosatFroom - opened

```
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py in hf_raise_for_status(response, endpoint_name)
    285 try:
--> 286     response.raise_for_status()
    287 except HTTPError as e:

12 frames

HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/lora_model/resolve/main/adapter_config.json

The above exception was the direct cause of the following exception:

RepositoryNotFoundError                   Traceback (most recent call last)

RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-659f089b-5b38a3236c61da7b5511cf28;e1730194-71a0-4efd-8fda-bd8cbd7042f2)

Repository Not Found for url: https://huggingface.co/lora_model/resolve/main/adapter_config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/peft/config.py in _get_peft_type(cls, model_id, **hf_hub_download_kwargs)
    188         )
    189     except Exception:
--> 190         raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
    191
    192     loaded_attributes = cls.from_json_file(config_file)

ValueError: Can't find 'adapter_config.json' at 'lora_model'
```

The previous commands:

```python
model = model.merge_and_unload()
model.save_pretrained("lora_model")  # Local saving
```

don't seem to save the right files for PEFT to load the adapter.
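A likely explanation, with a minimal sketch (folder names are illustrative): `merge_and_unload()` folds the LoRA weights into the base model and returns a plain transformers model, so `save_pretrained` on the result writes a full model checkpoint with no `adapter_config.json`. When `PeftModel.from_pretrained` can't find that file locally, it falls back to treating `"lora_model"` as a Hub repo id, which produces the 401 above. Saving the PEFT-wrapped model *before* merging keeps the adapter loadable:

```python
# Save the LoRA adapter BEFORE merging: this writes adapter_config.json
# plus the adapter weights, which is what PeftModel.from_pretrained expects.
model.save_pretrained("lora_model")

# merge_and_unload() returns the base model with the LoRA weights folded in;
# saving it produces a full model checkpoint, not a PEFT adapter.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("merged_model")  # illustrative folder name
```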

Unsloth AI org

@carlosatFroom Oops, sorry, just got to this!!

Hmm, interesting, let me check and get back to you.

As a temporary solution, saving to HF can be done via:

```python
trainer.train()  # FINISH TRAINING

model.save_pretrained(...)  # OR model.push_to_hub(...)
```
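Concretely, a minimal sketch (the folder name matches the load step below; the Hub repo id and token are placeholders):

```python
model.save_pretrained("lora_model")  # local save: writes adapter_config.json + adapter weights
# or push the adapter to the Hub instead:
# model.push_to_hub("your-username/lora_model", token = "hf_...")
```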

Then, to load the model in a new instance:

```python
from unsloth import FastLanguageModel
from peft import PeftModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-bnb-4bit",  # YOUR MODEL
    max_seq_length = max_seq_length,  # reuse the values from training
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
model = PeftModel.from_pretrained(model, "lora_model")
```

DO INFERENCE HERE
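For example, a minimal inference sketch (the prompt and generation settings are illustrative):

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
inputs = tokenizer("Hello, my name is", return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```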

Yes, that works. I assume the model had been uploaded to HF at some point, and that's why the loading worked: it had a snapshot on the Hub.

carlosatFroom changed discussion status to closed
