TariqJamil committed on
Commit 05cbbbb
1 Parent(s): cb54996

Added eval loss description


TrainOutput(global_step=20, training_loss=1.1999100148677826, metrics={'train_runtime': 14159.3937, 'train_samples_per_second': 0.003, 'train_steps_per_second': 0.001, 'total_flos': 1396298876682240.0, 'train_loss': 1.1999100148677826, 'epoch': 0.04})
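The throughput figures in the TrainOutput follow directly from the step count and runtime. A minimal sketch, copying the values from the log above and assuming `transformers` rounds its speed metrics to three decimal places:

```python
# Values copied from the TrainOutput above.
global_step = 20
train_runtime = 14159.3937  # seconds

# Assumption: speed metrics are rounded to 3 decimals, as in the log.
train_steps_per_second = round(global_step / train_runtime, 3)
print(train_steps_per_second)  # 0.001, matching the reported metric
```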

training_args = TrainingArguments(
    per_device_train_batch_size=32,   # adjust to fit GPU VRAM
    per_device_eval_batch_size=4,
    auto_find_batch_size=True,        # shrink batch size on OOM
    gradient_accumulation_steps=1,
    num_train_epochs=1,
    max_steps=20,                     # overrides num_train_epochs
    fp16=False,
    bf16=False,
    group_by_length=True,             # batch samples of similar length
    optim="paged_adamw_8bit",
    learning_rate=7e-4,
    weight_decay=0.001,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="linear",
    output_dir=OUTPUT_DIR,
    save_steps=1,
    logging_steps=1,
    save_strategy="steps",
    evaluation_strategy="steps",      # log eval loss every eval step
    save_total_limit=3,               # keep only the 3 most recent checkpoints
    report_to="tensorboard",
    load_best_model_at_end=True,      # requires matching save/eval strategies
)
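A few quantities implied by this config can be worked out by hand. A sketch, assuming a single-GPU run and the ceil-based warmup-step computation used by `transformers` when only `warmup_ratio` is set:

```python
import math

# Assumption: single GPU; values copied from the TrainingArguments above.
per_device_train_batch_size = 32
gradient_accumulation_steps = 1
num_gpus = 1

max_steps = 20
warmup_ratio = 0.03

# Effective (global) batch size per optimizer step.
effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)

# Assumption: warmup steps resolve as ceil(max_steps * warmup_ratio).
warmup_steps = math.ceil(max_steps * warmup_ratio)

print(effective_batch_size, warmup_steps)  # 32 1
```

So even with `warmup_ratio=0.03`, this short 20-step run warms up for a single step before the linear decay begins.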

# Set supervised fine-tuning parameters
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    eval_dataset=eval_dataset,        # needed for evaluation_strategy="steps"
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=None,
    tokenizer=tokenizer,
    args=training_args,
    packing=False,
)

Files changed (1): README.md (+1 −1)

README.md CHANGED
@@ -18,4 +18,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions


- - PEFT 0.5.0.dev0
+ - PEFT 0.5.0.dev0