---
license: apache-2.0
base_model: distilgpt2
tags:
  - generated_from_keras_callback
model-index:
  - name: pippinnie/distilgpt2-finetuned-cyber-readme-v2
    results: []
---

# pippinnie/distilgpt2-finetuned-cyber-readme-v2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results at the final training epoch:

- Train Loss: 2.4184
- Validation Loss: 3.0449
- Epoch: 33
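
While the sections below remain unfilled, here is a minimal loading-and-sampling sketch. The TensorFlow model class is an assumption based on the card's `generated_from_keras_callback` tag, and the prompt string is invented for illustration:

```python
# Minimal sketch: load the checkpoint and sample a short continuation.
# Assumes TensorFlow weights are published with this repo (implied by the
# Keras-callback tag); the prompt below is a made-up example.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "pippinnie/distilgpt2-finetuned-cyber-readme-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("## Installation\n", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```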

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} (see the sketch after this list)
- training_precision: float32
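
The dict above matches the constructor arguments of transformers' TF `AdamWeightDecay` optimizer. The sketch below shows how such a setup is typically wired into a Keras training loop; the dataset variables, epoch count, and push-to-hub callback are assumptions, since the actual training script is not part of this card:

```python
# Sketch of a Keras training setup matching the serialized optimizer config
# above. Only the optimizer hyperparameters come from this card; the dataset
# pipeline and callback wiring are assumptions.
from transformers import AdamWeightDecay, AutoTokenizer, TFAutoModelForCausalLM
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# Transformers TF models compute the LM loss internally when compile() is
# given no explicit loss argument.
model.compile(optimizer=optimizer)

# train_set / eval_set are hypothetical tf.data.Dataset objects built from
# the (undocumented) training corpus; epoch index 33 implies 34 epochs total.
# model.fit(
#     train_set,
#     validation_data=eval_set,
#     epochs=34,
#     callbacks=[PushToHubCallback(output_dir="./checkpoint", tokenizer=tokenizer)],
# )
```

Note that validation loss bottoms out around epoch 32 (3.0331) in the table below while train loss keeps falling, so if retraining, an early-stopping or best-checkpoint strategy may be worth considering.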

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1088 | 3.9258 | 0 |
| 3.9271 | 3.7983 | 1 |
| 3.7845 | 3.6781 | 2 |
| 3.6677 | 3.6006 | 3 |
| 3.5681 | 3.5272 | 4 |
| 3.4803 | 3.4643 | 5 |
| 3.4027 | 3.4068 | 6 |
| 3.3316 | 3.3671 | 7 |
| 3.2666 | 3.3179 | 8 |
| 3.2072 | 3.2817 | 9 |
| 3.1517 | 3.2565 | 10 |
| 3.1007 | 3.2283 | 11 |
| 3.0527 | 3.2051 | 12 |
| 3.0079 | 3.1826 | 13 |
| 2.9651 | 3.1590 | 14 |
| 2.9245 | 3.1529 | 15 |
| 2.8862 | 3.1404 | 16 |
| 2.8493 | 3.1245 | 17 |
| 2.8147 | 3.1075 | 18 |
| 2.7814 | 3.1077 | 19 |
| 2.7497 | 3.1036 | 20 |
| 2.7186 | 3.0859 | 21 |
| 2.6890 | 3.0722 | 22 |
| 2.6608 | 3.0842 | 23 |
| 2.6327 | 3.0561 | 24 |
| 2.6060 | 3.0477 | 25 |
| 2.5804 | 3.0663 | 26 |
| 2.5552 | 3.0479 | 27 |
| 2.5310 | 3.0426 | 28 |
| 2.5066 | 3.0420 | 29 |
| 2.4842 | 3.0671 | 30 |
| 2.4613 | 3.0414 | 31 |
| 2.4394 | 3.0331 | 32 |
| 2.4184 | 3.0449 | 33 |

### Framework versions

- Transformers 4.38.2
- TensorFlow 2.16.1
- Datasets 2.18.0
- Tokenizers 0.15.2