Newly trained LoRA model is completely unable to generate content related to the customized data

#2
by longquan - opened

We merged the [izumi-lab/llm-japanese-dataset-vanilla] dataset with 40,000 samples of customized data, then trained a LoRA model on the [meta-llama/Llama-2-7b-chat-hf] base model with the hyperparameters listed below.
{
"auto_mapping": null,
"base_model_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"lora_alpha": 16,
"lora_dropout": 0.05,
"modules_to_save": null,
"peft_type": "LORA",
"r": 8,
"revision": null,
"target_modules": [
"q_proj",
"v_proj"
],
"task_type": "CAUSAL_LM"
}
batch_size: int = 128,
micro_batch_size: int = 4,
num_epochs: int = 1,
learning_rate: float = 3e-4,
cutoff_len: int = 256,
val_set_size: int = 2000,
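For reference, here is a quick sanity-check of what this configuration implies. It assumes the standard Llama-2-7b shapes (32 decoder layers, hidden size 4096, so q_proj and v_proj are 4096×4096) and the usual convention that batch_size is the effective batch and micro_batch_size the per-device batch; neither assumption is stated explicitly in the post.

```python
# Back-of-the-envelope check of the LoRA setup above.
# Assumptions (not stated in the post): Llama-2-7b has 32 decoder
# layers with hidden size 4096, so q_proj and v_proj are 4096x4096.
hidden = 4096
num_layers = 32
r = 8
adapted_modules = 2                  # q_proj and v_proj

# Each adapted module gains A (r x hidden) and B (hidden x r).
per_module = r * hidden + hidden * r
trainable = per_module * adapted_modules * num_layers
print(trainable)                     # 4194304 adapter weights, ~0.06% of 7B

# batch_size / micro_batch_size gives the gradient-accumulation steps.
grad_accum = 128 // 4
print(grad_accum)                    # 32 micro-batches per optimizer step
```

So only about 4.2M adapter weights are actually being trained, which is tiny relative to the base model.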

Testing the newly trained model revealed that it is completely unable to generate content related to the customized data; it seems the customized data was not learned by the LoRA model.
The validation loss during LoRA training drops very smoothly to around 0.58 (training log summary below).

{
"best_metric": 0.5886363387107849,
"best_model_checkpoint": "./lora-alpaca_0821/checkpoint-1200",
"epoch": 0.5269730612468951,
"global_step": 1200,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.53,
"learning_rate": 0.0001484152503445108,
"loss": 0.3999,
"step": 1200
},
{
"epoch": 0.53,
"eval_loss": 0.5886363387107849,
"eval_runtime": 102.4819,
"eval_samples_per_second": 19.516,
"eval_steps_per_second": 2.439,
"step": 1200
}
],
"max_steps": 2277,
"num_train_epochs": 1,
"total_flos": 1.1445327441611981e+18,
"trial_name": null,
"trial_params": null
}

{'eval_loss': 0.5886363387107849, 'eval_runtime': 102.4819, 'eval_samples_per_second': 19.516, 'eval_steps_per_second': 2.439, 'epoch': 0.53}
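From the numbers in the log one can estimate what share of the training mix the custom data represents. This is a rough sketch; it assumes each optimizer step consumes the full effective batch of 128 samples, so the 2277 max_steps of the single epoch cover the whole merged training set.

```python
# Rough estimate of how much of the training mix the 40,000 custom
# samples represent, assuming each optimizer step consumes the full
# effective batch of 128 samples.
batch_size = 128
max_steps = 2277                        # from the trainer state above
custom_samples = 40_000

train_samples = max_steps * batch_size  # one epoch over the merged set
custom_fraction = custom_samples / train_samples
print(train_samples)                    # 291456 training samples
print(round(custom_fraction, 3))        # 0.137 -> custom data is ~14%

# Cross-check: checkpoint 1200 should sit at epoch 1200 / 2277.
print(round(1200 / max_steps, 4))       # 0.527, matching the logged epoch
```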

How can I get the LoRA model to actually learn the custom data?
Any help you can give would be greatly appreciated!

