---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/cchs-timebert-visit-uncased-wordlevel-block512-batch4-ep100
results: []
---
# hsohn3/cchs-timebert-visit-uncased-wordlevel-block512-batch4-ep100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following final results during training (only the training loss was logged; no separate evaluation metric is available):
- Train Loss: 0.8009
- Epoch: 99
## Model description
More information needed. The model name suggests that this checkpoint is bert-base-uncased further trained on visit-level CCHS clinical text with a word-level tokenizer, a block size of 512, a batch size of 4, and 100 epochs, but this is not confirmed in the card itself.
## Intended uses & limitations
More information needed
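The checkpoint can be loaded with the standard `transformers` API. Below is a minimal sketch, assuming the model was trained with a masked-language-modeling objective (the card does not state the task; the BERT architecture and word-level tokenizer make MLM the most likely setup). The input sentence is hypothetical.

```python
# Hedged sketch: assumes an MLM head; the training task is not stated in this card.
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "hsohn3/cchs-timebert-visit-uncased-wordlevel-block512-batch4-ep100"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# Score vocabulary candidates for a masked token; the example text is made up.
inputs = tokenizer("patient admitted with [MASK] on day one", return_tensors="tf")
logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)
```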
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
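The optimizer dictionary above matches `transformers`' TensorFlow `AdamWeightDecay` class. A minimal sketch of reconstructing it, copying the logged values verbatim (any settings not recorded above, such as weight-decay exclusions, are left at their defaults):

```python
# Reconstruction of the logged optimizer config; all values are taken from above.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# transformers TF models can be compiled without an explicit loss;
# the model's built-in loss is then used during fit().
model.compile(optimizer=optimizer)
```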
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.8699 | 0 |
| 3.1667 | 1 |
| 3.1286 | 2 |
| 3.1169 | 3 |
| 3.1077 | 4 |
| 3.0989 | 5 |
| 3.0911 | 6 |
| 3.0896 | 7 |
| 3.0820 | 8 |
| 3.0856 | 9 |
| 3.0827 | 10 |
| 3.0800 | 11 |
| 3.0647 | 12 |
| 3.0396 | 13 |
| 3.0052 | 14 |
| 2.9879 | 15 |
| 2.9633 | 16 |
| 2.9449 | 17 |
| 2.9217 | 18 |
| 2.8921 | 19 |
| 2.8625 | 20 |
| 2.8153 | 21 |
| 2.7495 | 22 |
| 2.6202 | 23 |
| 2.3762 | 24 |
| 2.1064 | 25 |
| 1.8489 | 26 |
| 1.6556 | 27 |
| 1.5005 | 28 |
| 1.4110 | 29 |
| 1.3472 | 30 |
| 1.2896 | 31 |
| 1.2391 | 32 |
| 1.2001 | 33 |
| 1.1663 | 34 |
| 1.1418 | 35 |
| 1.1159 | 36 |
| 1.0987 | 37 |
| 1.0753 | 38 |
| 1.0608 | 39 |
| 1.0456 | 40 |
| 1.0381 | 41 |
| 1.0248 | 42 |
| 1.0127 | 43 |
| 0.9970 | 44 |
| 0.9958 | 45 |
| 0.9847 | 46 |
| 0.9789 | 47 |
| 0.9617 | 48 |
| 0.9575 | 49 |
| 0.9517 | 50 |
| 0.9442 | 51 |
| 0.9379 | 52 |
| 0.9350 | 53 |
| 0.9325 | 54 |
| 0.9235 | 55 |
| 0.9182 | 56 |
| 0.9139 | 57 |
| 0.9074 | 58 |
| 0.8984 | 59 |
| 0.8988 | 60 |
| 0.8958 | 61 |
| 0.8937 | 62 |
| 0.8853 | 63 |
| 0.8812 | 64 |
| 0.8758 | 65 |
| 0.8729 | 66 |
| 0.8732 | 67 |
| 0.8647 | 68 |
| 0.8634 | 69 |
| 0.8604 | 70 |
| 0.8577 | 71 |
| 0.8597 | 72 |
| 0.8508 | 73 |
| 0.8510 | 74 |
| 0.8450 | 75 |
| 0.8451 | 76 |
| 0.8398 | 77 |
| 0.8392 | 78 |
| 0.8345 | 79 |
| 0.8350 | 80 |
| 0.8329 | 81 |
| 0.8299 | 82 |
| 0.8257 | 83 |
| 0.8217 | 84 |
| 0.8192 | 85 |
| 0.8211 | 86 |
| 0.8208 | 87 |
| 0.8171 | 88 |
| 0.8166 | 89 |
| 0.8134 | 90 |
| 0.8124 | 91 |
| 0.8102 | 92 |
| 0.8133 | 93 |
| 0.8066 | 94 |
| 0.8023 | 95 |
| 0.8049 | 96 |
| 0.8024 | 97 |
| 0.7980 | 98 |
| 0.8009 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1