UTI2_L3_1000steps_1e5rate_05beta_CSFTDPO

This model is a DPO fine-tuned version of tsavage68/UTI_L3_1000steps_1e5rate_SFT on an unknown preference dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6931
  • Rewards/chosen: 0.0
  • Rewards/rejected: 0.0
  • Rewards/accuracies: 0.0
  • Rewards/margins: 0.0
  • Logps/rejected: 0.0
  • Logps/chosen: 0.0
  • Logits/rejected: -1.1794
  • Logits/chosen: -1.1794
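
For reference, these values match the DPO objective at initialization: with zero rewards for both the chosen and rejected completions, the reward margin is zero and the loss sits at its starting point of ln 2,

$$
\mathcal{L}_{\text{DPO}} = -\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right),
\qquad
\mathcal{L}_{\text{DPO}}\Big|_{\text{margin}=0} = -\log \sigma(0) = \ln 2 \approx 0.6931.
$$

A validation loss pinned at 0.6931 for all 1000 steps (see the table below) therefore indicates that the policy's log-probability ratios never diverged from the reference model's.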

Model description

More information needed

Intended uses & limitations

More information needed
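
No intended uses are documented. As a minimal loading sketch using the standard transformers causal-LM API (the repo id is assumed from the base model's namespace, and the prompt format is an assumption, since neither is documented on this card):

```python
# Minimal sketch: load the checkpoint and generate text.
# Assumptions: repo namespace taken from the base model; FP16 weights;
# prompt format is undocumented, so the prompt below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/UTI2_L3_1000steps_1e5rate_05beta_CSFTDPO"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "..."  # domain and expected prompt format are not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```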

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
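
Assembled into code, the hyperparameters above correspond roughly to a TRL DPOTrainer run like the sketch below. This is a hedged reconstruction, not the author's script: it assumes the TRL API contemporary with the Transformers 4.41.2 pin, the preference dataset is unknown (the name below is a placeholder), beta=0.5 is inferred from the "05beta" suffix in the model name, and the 25-step eval interval is read off the results table.

```python
# Hedged sketch of a DPO run matching the hyperparameters on this card.
# Assumptions: TRL DPOTrainer API (~0.8.x, where beta is a trainer kwarg),
# a placeholder dataset name, and beta inferred from the model name.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/UTI_L3_1000steps_1e5rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

args = TrainingArguments(
    output_dir="UTI2_L3_1000steps_1e5rate_05beta_CSFTDPO",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size = 2 * 2 = 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=25,                   # inferred from the results table below
)

trainer = DPOTrainer(
    model,
    ref_model,
    beta=0.5,  # assumption: read off the "05beta" model-name suffix
    args=args,
    # placeholder: the actual dataset is unknown; DPO expects
    # prompt/chosen/rejected columns
    train_dataset=load_dataset("your_preference_dataset", split="train"),
    eval_dataset=load_dataset("your_preference_dataset", split="test"),
    tokenizer=tokenizer,
)
trainer.train()
```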

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6931 | 0.3333 | 25 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 0.6667 | 50 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 1.0 | 75 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 1.3333 | 100 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 1.6667 | 125 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 2.0 | 150 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 2.3333 | 175 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 2.6667 | 200 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 3.0 | 225 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 3.3333 | 250 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 3.6667 | 275 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 4.0 | 300 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 4.3333 | 325 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 4.6667 | 350 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 5.0 | 375 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 5.3333 | 400 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 5.6667 | 425 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 6.0 | 450 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 6.3333 | 475 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 6.6667 | 500 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 7.0 | 525 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 7.3333 | 550 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 7.6667 | 575 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 8.0 | 600 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 8.3333 | 625 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 8.6667 | 650 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 9.0 | 675 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 9.3333 | 700 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 9.6667 | 725 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 10.0 | 750 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 10.3333 | 775 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 10.6667 | 800 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 11.0 | 825 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 11.3333 | 850 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 11.6667 | 875 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 12.0 | 900 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 12.3333 | 925 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 12.6667 | 950 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 13.0 | 975 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |
| 0.6931 | 13.3333 | 1000 | 0.6931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.1794 | -1.1794 |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.0.0+cu117
  • Datasets 2.19.2
  • Tokenizers 0.19.1