
# watson-cerebras

This model is a fine-tuned version of [cerebras/Cerebras-GPT-111M](https://huggingface.co/cerebras/Cerebras-GPT-111M) on an unknown dataset. It achieves the following results after the final training epoch:

- Train Loss: 0.4660
- Validation Loss: 5.9842
- Epoch: 9
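
A minimal usage sketch, assuming the checkpoint is published to the Hub under a placeholder repo id (`your-username/watson-cerebras`) and using the TensorFlow classes that match the framework versions listed below:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
repo_id = "your-username/watson-cerebras"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation for an arbitrary prompt.
inputs = tokenizer("Hello, my name is", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```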

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: AdamWeightDecay (learning_rate=2e-05, decay=0.0, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, weight_decay_rate=0.01)
- training_precision: float32
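
As a point of reference, a minimal sketch of how this configuration maps onto the `AdamWeightDecay` optimizer shipped with `transformers`; the fine-tuning dataset is unknown, so `train_set` and `val_set` below are hypothetical placeholders:

```python
from transformers import AdamWeightDecay, TFAutoModelForCausalLM

# Start from the base checkpoint named above.
model = TFAutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")

# Optimizer rebuilt from the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)

# Hugging Face TF models compute their loss internally when the dataset
# batches include labels, so no explicit loss is passed to compile().
model.compile(optimizer=optimizer)

# With tokenized tf.data pipelines in hand (train_set / val_set are
# hypothetical placeholders), ten epochs correspond to the table below:
# model.fit(train_set, validation_data=val_set, epochs=10)
```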

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9574     | 4.6971          | 0     |
| 1.4977     | 4.9437          | 1     |
| 1.1154     | 5.1888          | 2     |
| 0.8291     | 5.4171          | 3     |
| 0.6540     | 5.5523          | 4     |
| 0.5654     | 5.6664          | 5     |
| 0.5230     | 5.7875          | 6     |
| 0.4976     | 5.8304          | 7     |
| 0.4838     | 5.8645          | 8     |
| 0.4660     | 5.9842          | 9     |

Note that the validation loss rises monotonically while the training loss falls, which suggests the model overfits the fine-tuning data.

### Framework versions

- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3