
pythia-70m_tatsu-lab_alpaca_farm_sftsd0_policy_pythia-6.9b_gold_offsetbias-8b_noise0.25_rmsd0

This model is a fine-tuned version of RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7578
  • Accuracy: 0.5255
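
The card does not document the intended usage, but the reported accuracy suggests a preference/reward-modeling style evaluation. The snippet below is a minimal loading sketch under that assumption: it loads the checkpoint with AutoModelForSequenceClassification and scores a single text. The repository id is taken from the model page; if the checkpoint actually uses a different head, swap in the appropriate Auto class.

```python
# Minimal loading sketch. Assumption: the checkpoint carries a sequence-
# classification (reward-style) head; adjust the Auto class if it does not.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "RylanSchaeffer/pythia-70m_tatsu-lab_alpaca_farm_sftsd0_policy_pythia-6.9b_gold_offsetbias-8b_noise0.25_rmsd0"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the published weights are stored in BF16
)

# Score one prompt/response string (illustrative input, not from the card).
inputs = tokenizer("Example prompt and response to score.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits
print(score)
```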

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 0
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.025
  • num_epochs: 5
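
For reference, here is a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`. The `output_dir` is a hypothetical placeholder, and the Trainer's default AdamW optimizer stands in for the "Adam" entry listed above; all other values mirror the list.

```python
# Sketch only: reproduces the listed hyperparameters, not the full training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pythia-70m_reward_model",  # hypothetical path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # 16 * 2 = 32 total train batch size
    seed=0,
    adam_beta1=0.9,                  # card lists Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,
    num_train_epochs=5,
)
```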

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0 | 0 | 0.8506 | 0.5085 |
| 0.7856 | 0.0648 | 100 | 0.8417 | 0.5154 |
| 0.8988 | 0.1296 | 200 | 0.8312 | 0.5035 |
| 0.8425 | 0.1944 | 300 | 0.8257 | 0.5073 |
| 0.8449 | 0.2592 | 400 | 0.8066 | 0.5158 |
| 0.7617 | 0.3239 | 500 | 0.8017 | 0.5224 |
| 0.8037 | 0.3887 | 600 | 0.7950 | 0.5174 |
| 0.7883 | 0.4535 | 700 | 0.7927 | 0.5204 |
| 0.7897 | 0.5183 | 800 | 0.7897 | 0.5204 |
| 0.7966 | 0.5831 | 900 | 0.7776 | 0.5104 |
| 0.7762 | 0.6479 | 1000 | 0.7783 | 0.5116 |
| 0.7904 | 0.7127 | 1100 | 0.7675 | 0.5185 |
| 0.7542 | 0.7775 | 1200 | 0.7670 | 0.5216 |
| 0.8068 | 0.8422 | 1300 | 0.7714 | 0.5104 |
| 0.7943 | 0.9070 | 1400 | 0.7693 | 0.5166 |
| 0.7704 | 0.9718 | 1500 | 0.7697 | 0.5096 |
| 0.7728 | 1.0366 | 1600 | 0.7666 | 0.5123 |
| 0.7862 | 1.1014 | 1700 | 0.7671 | 0.5116 |
| 0.7159 | 1.1662 | 1800 | 0.7709 | 0.5158 |
| 0.7586 | 1.2310 | 1900 | 0.7631 | 0.5158 |
| 0.7751 | 1.2958 | 2000 | 0.7678 | 0.5096 |
| 0.7356 | 1.3605 | 2100 | 0.7665 | 0.5120 |
| 0.7916 | 1.4253 | 2200 | 0.7610 | 0.5093 |
| 0.7289 | 1.4901 | 2300 | 0.7586 | 0.5104 |
| 0.7283 | 1.5549 | 2400 | 0.7570 | 0.5189 |
| 0.7382 | 1.6197 | 2500 | 0.7611 | 0.5123 |
| 0.7677 | 1.6845 | 2600 | 0.7582 | 0.5220 |
| 0.7639 | 1.7493 | 2700 | 0.7587 | 0.5147 |
| 0.8038 | 1.8141 | 2800 | 0.7589 | 0.5073 |
| 0.7765 | 1.8788 | 2900 | 0.7631 | 0.5104 |
| 0.7728 | 1.9436 | 3000 | 0.7565 | 0.5108 |
| 0.774 | 2.0084 | 3100 | 0.7601 | 0.5216 |
| 0.751 | 2.0732 | 3200 | 0.7588 | 0.5231 |
| 0.7838 | 2.1380 | 3300 | 0.7633 | 0.5120 |
| 0.73 | 2.2028 | 3400 | 0.7583 | 0.5174 |
| 0.7496 | 2.2676 | 3500 | 0.7594 | 0.5212 |
| 0.7925 | 2.3324 | 3600 | 0.7641 | 0.5143 |
| 0.7393 | 2.3971 | 3700 | 0.7589 | 0.5127 |
| 0.7359 | 2.4619 | 3800 | 0.7586 | 0.5154 |
| 0.767 | 2.5267 | 3900 | 0.7596 | 0.5112 |
| 0.7707 | 2.5915 | 4000 | 0.7604 | 0.5066 |
| 0.7245 | 2.6563 | 4100 | 0.7575 | 0.5143 |
| 0.7138 | 2.7211 | 4200 | 0.7612 | 0.5162 |
| 0.7398 | 2.7859 | 4300 | 0.7577 | 0.5193 |
| 0.7363 | 2.8507 | 4400 | 0.7547 | 0.5235 |
| 0.733 | 2.9155 | 4500 | 0.7592 | 0.5181 |
| 0.7584 | 2.9802 | 4600 | 0.7562 | 0.5174 |
| 0.7846 | 3.0450 | 4700 | 0.7575 | 0.5174 |
| 0.7295 | 3.1098 | 4800 | 0.7576 | 0.5166 |
| 0.775 | 3.1746 | 4900 | 0.7538 | 0.5216 |
| 0.7513 | 3.2394 | 5000 | 0.7599 | 0.5154 |
| 0.7589 | 3.3042 | 5100 | 0.7601 | 0.5189 |
| 0.7205 | 3.3690 | 5200 | 0.7568 | 0.5174 |
| 0.7262 | 3.4338 | 5300 | 0.7554 | 0.5181 |
| 0.7971 | 3.4985 | 5400 | 0.7591 | 0.5201 |
| 0.7276 | 3.5633 | 5500 | 0.7579 | 0.5220 |
| 0.7736 | 3.6281 | 5600 | 0.7588 | 0.5135 |
| 0.8005 | 3.6929 | 5700 | 0.7599 | 0.5243 |
| 0.7843 | 3.7577 | 5800 | 0.7563 | 0.5147 |
| 0.7411 | 3.8225 | 5900 | 0.7595 | 0.5177 |
| 0.7341 | 3.8873 | 6000 | 0.7589 | 0.5197 |
| 0.7376 | 3.9521 | 6100 | 0.7582 | 0.5197 |
| 0.7611 | 4.0168 | 6200 | 0.7560 | 0.5212 |
| 0.7506 | 4.0816 | 6300 | 0.7570 | 0.5243 |
| 0.7533 | 4.1464 | 6400 | 0.7557 | 0.5177 |
| 0.7328 | 4.2112 | 6500 | 0.7555 | 0.5135 |
| 0.7443 | 4.2760 | 6600 | 0.7565 | 0.5185 |
| 0.7135 | 4.3408 | 6700 | 0.7576 | 0.5177 |
| 0.8016 | 4.4056 | 6800 | 0.7590 | 0.5100 |
| 0.7408 | 4.4704 | 6900 | 0.7554 | 0.5143 |
| 0.7693 | 4.5351 | 7000 | 0.7550 | 0.5216 |
| 0.7257 | 4.5999 | 7100 | 0.7569 | 0.5147 |
| 0.7576 | 4.6647 | 7200 | 0.7586 | 0.5185 |
| 0.7378 | 4.7295 | 7300 | 0.7560 | 0.5224 |
| 0.7859 | 4.7943 | 7400 | 0.7566 | 0.5139 |
| 0.7404 | 4.8591 | 7500 | 0.7546 | 0.5193 |
| 0.7355 | 4.9239 | 7600 | 0.7572 | 0.5228 |
| 0.7672 | 4.9887 | 7700 | 0.7575 | 0.5270 |

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
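
A quick way to check that a local environment matches the versions listed above (nothing in this snippet is specific to the model itself):

```python
# Print installed framework versions for comparison with the card.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # card reports 4.42.4
print("PyTorch:", torch.__version__)              # card reports 2.3.0+cu121
print("Datasets:", datasets.__version__)          # card reports 2.20.0
print("Tokenizers:", tokenizers.__version__)      # card reports 0.19.1
```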