# loose-balanced_seed-42_1e-3

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.0197
- Accuracy: 0.4204
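
Assuming the evaluation loss is the usual per-token cross-entropy in nats (the Trainer default for language modeling), it corresponds to a perplexity of about exp(3.0197) ≈ 20.5; a one-line check:

```python
import math

eval_loss = 3.0197          # reported evaluation loss (assumed nats per token)
print(math.exp(eval_loss))  # ≈ 20.5, i.e. the implied evaluation perplexity
```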

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
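
As a rough guide to reproducing this configuration, here is a minimal sketch of how the settings above map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and `fp16=True` assumes the "Native AMP" run used fp16 rather than bf16 (the actual training script is not published):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="loose-balanced_seed-42_1e-3",  # placeholder, not from the run
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                       # assumption: Native AMP in fp16, not bf16
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```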

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.1733        | 0.9999  | 1788  | 4.2544          | 0.3064   |
| 4.0439        | 1.9999  | 3576  | 3.7251          | 0.3482   |
| 3.589         | 2.9998  | 5364  | 3.4746          | 0.3712   |
| 3.3578        | 3.9997  | 7152  | 3.3422          | 0.3835   |
| 3.2717        | 4.9997  | 8940  | 3.2619          | 0.3912   |
| 3.1665        | 5.9996  | 10728 | 3.2139          | 0.3953   |
| 3.1029        | 6.9995  | 12516 | 3.1810          | 0.3988   |
| 3.0572        | 8.0     | 14305 | 3.1586          | 0.4011   |
| 3.0269        | 8.9999  | 16093 | 3.1420          | 0.4031   |
| 2.9818        | 9.9999  | 17881 | 3.1302          | 0.4046   |
| 2.9603        | 10.9998 | 19669 | 3.1196          | 0.4054   |
| 2.9518        | 11.9997 | 21457 | 3.1128          | 0.4065   |
| 2.9438        | 12.9997 | 23245 | 3.1099          | 0.4069   |
| 2.94          | 13.9996 | 25033 | 3.1040          | 0.4076   |
| 2.8983        | 14.9995 | 26821 | 3.1025          | 0.4079   |
| 2.9018        | 16.0    | 28610 | 3.0990          | 0.4080   |
| 2.9052        | 16.9999 | 30398 | 3.0976          | 0.4088   |
| 2.9108        | 17.9999 | 32186 | 3.0897          | 0.4097   |
| 2.8558        | 18.9998 | 33974 | 3.0418          | 0.4157   |
| 2.7033        | 19.9986 | 35760 | 3.0197          | 0.4204   |
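
Two figures implied by this table may be worth noting: with 1788 optimizer steps per epoch and an effective batch of 256, each epoch processes roughly 458k training examples, and the 32000 warmup steps span almost 90% of the 35760 total steps, so the linear decay only begins during the final epochs, which lines up with the sharper drop in validation loss there. A quick check using only numbers reported above:

```python
steps_per_epoch = 1788    # optimizer steps in epoch 1 (from the table)
effective_batch = 256     # train_batch_size 32 * gradient_accumulation 8
total_steps = 35760       # final step in the table
warmup_steps = 32000

print(steps_per_epoch * effective_batch)    # 457728 examples per epoch
print(f"{warmup_steps / total_steps:.0%}")  # 89% of training spent in warmup
```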

### Framework versions

- Transformers 4.45.1
- PyTorch 2.5.1+cu124
- Datasets 2.19.1
- Tokenizers 0.20.0
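
With the pinned versions above, the checkpoint (110M parameters, stored as F32 safetensors) can be loaded in the usual way; note that `AutoModelForCausalLM` is an assumption based on the perplexity-style evaluation, since the card does not state the architecture or task head:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "qing-yao/loose-balanced_seed-42_1e-3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # causal-LM head is assumed
```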