
hsohn3/mayo-timebert-visit-uncased-wordlevel-block512-batch4-ep100

This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.8536
  • Epoch: 99
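
The checkpoint can be loaded for masked-language-model inference as sketched below. This is a minimal, hedged example: only the model id comes from this card; the TensorFlow model class, the fill-mask pipeline, and the input sentence are assumptions about how the checkpoint is meant to be used.

```python
# Minimal sketch: load the checkpoint for masked-LM inference.
# Assumes the repository ships TensorFlow weights and a compatible fast tokenizer.
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_id = "hsohn3/mayo-timebert-visit-uncased-wordlevel-block512-batch4-ep100"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# Example fill-mask query; the input text is purely illustrative.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"patient admitted with acute {tokenizer.mask_token} failure"))
```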

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TensorFlow setup sketch follows the list):

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
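
For reference, the optimizer above can be reconstructed with the AdamWeightDecay class that ships with Transformers for TensorFlow. This is a sketch assembled only from the serialized values in the list; the 'decay': 0.0 entry is the legacy Keras learning-rate decay (i.e., no schedule), and no other settings are assumed.

```python
# Sketch: rebuild the optimizer described above using the TF optimizer class
# bundled with Transformers. All values are copied from the hyperparameter list.
import tensorflow as tf
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# training_precision: float32 is the default Keras policy, so no mixed-precision
# configuration is required; it is set explicitly here only for completeness.
tf.keras.mixed_precision.set_global_policy("float32")
```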

Training results

Train Loss  Epoch
----------  -----
3.9508 0
3.4063 1
3.3682 2
3.3468 3
3.3330 4
3.3308 5
3.3225 6
3.3106 7
3.2518 8
3.1859 9
3.1373 10
3.0923 11
3.0390 12
2.9560 13
2.8605 14
2.7564 15
2.4969 16
2.2044 17
1.9566 18
1.7686 19
1.5995 20
1.4932 21
1.4100 22
1.3538 23
1.2973 24
1.2610 25
1.2160 26
1.1916 27
1.1607 28
1.1468 29
1.1262 30
1.1123 31
1.0942 32
1.0816 33
1.0717 34
1.0575 35
1.0503 36
1.0411 37
1.0293 38
1.0229 39
1.0139 40
1.0081 41
1.0028 42
0.9967 43
0.9906 44
0.9834 45
0.9782 46
0.9766 47
0.9676 48
0.9618 49
0.9611 50
0.9553 51
0.9504 52
0.9483 53
0.9404 54
0.9423 55
0.9361 56
0.9327 57
0.9327 58
0.9263 59
0.9275 60
0.9218 61
0.9202 62
0.9158 63
0.9152 64
0.9091 65
0.9104 66
0.9094 67
0.9087 68
0.9034 69
0.9063 70
0.8984 71
0.8966 72
0.8953 73
0.8910 74
0.8913 75
0.8887 76
0.8868 77
0.8868 78
0.8815 79
0.8821 80
0.8791 81
0.8752 82
0.8731 83
0.8779 84
0.8727 85
0.8702 86
0.8712 87
0.8689 88
0.8646 89
0.8644 90
0.8608 91
0.8643 92
0.8602 93
0.8605 94
0.8568 95
0.8567 96
0.8557 97
0.8543 98
0.8536 99

Framework versions

  • Transformers 4.20.1
  • TensorFlow 2.8.2
  • Datasets 2.3.2
  • Tokenizers 0.12.1
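
To approximate the training environment, the versions above can be pinned as below. The Python and CUDA versions are not recorded on this card, so this is a partial reproduction at best.

```text
# requirements.txt (pinned to the versions listed on this card)
transformers==4.20.1
tensorflow==2.8.2
datasets==2.3.2
tokenizers==0.12.1
```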