---
library_name: transformers
tags:
  - generated_from_trainer
model-index:
  - name: lltransformer-linear-test1
    results: []
---

# lltransformer-linear-test1

This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 4.3793
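For intuition, assuming this loss is the mean per-token cross-entropy in nats (the Trainer's default for language modeling), it corresponds to a perplexity of roughly 80. A minimal check:

```python
import math

eval_loss = 4.3793  # final validation loss from the table below
perplexity = math.exp(eval_loss)  # valid if loss is mean per-token cross-entropy in nats
print(f"perplexity = {perplexity:.1f}")  # ~79.8
```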

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
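A minimal sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and any argument not listed above keeps its library default:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lltransformer-linear-test1",  # hypothetical output path
    learning_rate=6e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1234,
    gradient_accumulation_steps=16,  # 4 per device x 16 steps = effective batch size of 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```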

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8532        | 0.0320 | 100  | 6.7483          |
| 6.1754        | 0.0640 | 200  | 6.1243          |
| 5.8756        | 0.0959 | 300  | 5.7804          |
| 5.5348        | 0.1279 | 400  | 5.5261          |
| 5.2918        | 0.1599 | 500  | 5.3721          |
| 5.329         | 0.1919 | 600  | 5.2467          |
| 5.0479        | 0.2239 | 700  | 5.1346          |
| 5.0769        | 0.2559 | 800  | 5.0477          |
| 4.9082        | 0.2878 | 900  | 4.9726          |
| 4.8851        | 0.3198 | 1000 | 4.9025          |
| 4.8578        | 0.3518 | 1100 | 4.8424          |
| 4.7683        | 0.3838 | 1200 | 4.7891          |
| 4.7845        | 0.4158 | 1300 | 4.7421          |
| 4.7651        | 0.4477 | 1400 | 4.6986          |
| 4.6101        | 0.4797 | 1500 | 4.6589          |
| 4.5814        | 0.5117 | 1600 | 4.6180          |
| 4.5607        | 0.5437 | 1700 | 4.5858          |
| 4.62          | 0.5757 | 1800 | 4.5545          |
| 4.4465        | 0.6076 | 1900 | 4.5254          |
| 4.5038        | 0.6396 | 2000 | 4.5018          |
| 4.4746        | 0.6716 | 2100 | 4.4765          |
| 4.4328        | 0.7036 | 2200 | 4.4544          |
| 4.4182        | 0.7356 | 2300 | 4.4368          |
| 4.4987        | 0.7676 | 2400 | 4.4215          |
| 4.4017        | 0.7995 | 2500 | 4.4085          |
| 4.4284        | 0.8315 | 2600 | 4.3983          |
| 4.3105        | 0.8635 | 2700 | 4.3901          |
| 4.2949        | 0.8955 | 2800 | 4.3846          |
| 4.3673        | 0.9275 | 2900 | 4.3812          |
| 4.3048        | 0.9594 | 3000 | 4.3796          |
| 4.4036        | 0.9914 | 3100 | 4.3793          |
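To visualize the curves, a small self-contained matplotlib sketch; the values are transcribed directly from the table above:

```python
import matplotlib.pyplot as plt

# Losses transcribed from the table above (logged every 100 steps).
steps = list(range(100, 3200, 100))
train_loss = [6.8532, 6.1754, 5.8756, 5.5348, 5.2918, 5.3290, 5.0479, 5.0769,
              4.9082, 4.8851, 4.8578, 4.7683, 4.7845, 4.7651, 4.6101, 4.5814,
              4.5607, 4.6200, 4.4465, 4.5038, 4.4746, 4.4328, 4.4182, 4.4987,
              4.4017, 4.4284, 4.3105, 4.2949, 4.3673, 4.3048, 4.4036]
val_loss = [6.7483, 6.1243, 5.7804, 5.5261, 5.3721, 5.2467, 5.1346, 5.0477,
            4.9726, 4.9025, 4.8424, 4.7891, 4.7421, 4.6986, 4.6589, 4.6180,
            4.5858, 4.5545, 4.5254, 4.5018, 4.4765, 4.4544, 4.4368, 4.4215,
            4.4085, 4.3983, 4.3901, 4.3846, 4.3812, 4.3796, 4.3793]

plt.plot(steps, train_loss, label="training loss")
plt.plot(steps, val_loss, label="validation loss")
plt.xlabel("step")
plt.ylabel("cross-entropy loss")
plt.legend()
plt.tight_layout()
plt.show()
```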

### Framework versions

- Transformers 4.51.3
- PyTorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1
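A minimal loading sketch. The repo id `mtzig/lltransformer-linear-test1` is inferred from this card, and the use of a causal-LM head is an assumption; adjust both if the checkpoint differs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the card; adjust if the checkpoint lives elsewhere.
repo_id = "mtzig/lltransformer-linear-test1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # assumes a causal language model
```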