Fine-tuning using Hugging Face

#3
by Saugatkafley - opened

Is there any method of fine-tuning this using the Hugging Face Trainer?

Hello, have a look at a snippet that should work:

```python
# Reference: https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
from transformers import TrainingArguments, Trainer

training_directory = "nli-few-shot/mnli-v2xl/"

train_args = TrainingArguments(
    output_dir=f'./results/{training_directory}',
    overwrite_output_dir=True,
    save_steps=10_000,
    save_total_limit=2,
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    # warmup_steps=0,  # 1000
    warmup_ratio=0.06,  # 0.1, 0.06
    weight_decay=0.1,   # 0.1
    fp16=True,
    fp16_full_eval=True,
    seed=42,
    prediction_loss_only=True,
)
```
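If it helps, here is a minimal sketch of how those arguments can be passed to the Trainer. The checkpoint name and the tokenized datasets below are placeholders, not something specific to this model:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

# Placeholder checkpoint -- substitute the model you actually want to fine-tune.
model_name = "your-nli-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

trainer = Trainer(
    model=model,
    args=train_args,              # the TrainingArguments defined above
    train_dataset=train_dataset,  # your tokenized training split (placeholder)
    eval_dataset=eval_dataset,    # your tokenized validation split (placeholder)
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model(f'./results/{training_directory}')
```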

Any particular issues you are facing?

I wanted to fine-tune this with LoRA, but the training failed with unknown issues.

Owner

Please at least attach the code and the error message.
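In the meantime, here is a rough sketch of how LoRA fine-tuning with the peft library could be combined with the Trainer setup above. The checkpoint name, target modules, and datasets are assumptions, not something verified against this model:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

# Placeholder checkpoint -- substitute the actual model you are fine-tuning.
model_name = "your-nli-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# LoRA configuration; target_modules depends on the architecture
# (e.g. "query_proj"/"value_proj" for DeBERTa-v2, "query"/"value" for BERT).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query_proj", "value_proj"],  # assumption, architecture-dependent
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only the adapter weights should be trainable

trainer = Trainer(
    model=model,
    args=train_args,              # reuse the TrainingArguments from above
    train_dataset=train_dataset,  # your tokenized splits (placeholders)
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```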
