Error when trying to fine-tune the model

#12
by wirytiox - opened

I am trying to fine-tune the model with PEFT (I don't know if there is another way that would work on my hardware),
and I cannot get it to work. This is the code snippet that stops execution:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextDataset
from peft import LoraConfig, PeftModel, get_peft_model

MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

base_model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="cuda:0")

# this is the call that raises the error below
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",
    is_trainable=True,
)

file_path = r"C:\Users\juanj\Desktop\konosuba-dialogue.txt"
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

dataset = TextDataset(tokenizer=tokenizer, file_path=file_path, block_size=512)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()

and I get this error:

ValueError: Can't find 'adapter_config.json' at 'TheBloke/WizardLM-7B-uncensored-GPTQ'
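
From the error, PeftModel.from_pretrained is looking for adapter weights (an adapter_config.json) inside the TheBloke/WizardLM-7B-uncensored-GPTQ repo, but that repo only contains the quantized base model; there is no loftq_init subfolder or adapter there, so the lookup fails. Below is a minimal sketch of what should work instead, assuming the goal is to attach a fresh LoRA adapter to the GPTQ base model rather than load an existing one. The q_proj/k_proj/v_proj/o_proj module names and the prepare_model_for_kbit_training call are my assumptions for a LLaMA-family model like WizardLM-7B; "query_key_value" is the Falcon/BLOOM naming and would match no modules here.

# Sketch under the assumptions above — not a verified recipe for this exact repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
base_model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="cuda:0")

# Skip PeftModel.from_pretrained entirely: that call is only for loading an
# adapter that already exists on the Hub or on disk.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    # LLaMA-family attention projection names (assumption for this model)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# usual preparation step for training on top of a quantized model
base_model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()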
