LongRiver committed on
Commit 18db304
1 Parent(s): 77b27d4

Training in progress epoch 0
README.md CHANGED
@@ -15,13 +15,13 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 1.7583
- - Train End Logits Accuracy: 0.5813
- - Train Start Logits Accuracy: 0.5573
- - Validation Loss: 2.0446
- - Validation End Logits Accuracy: 0.5277
- - Validation Start Logits Accuracy: 0.4928
- - Epoch: 1
+ - Train Loss: 2.3715
+ - Train End Logits Accuracy: 0.5054
+ - Train Start Logits Accuracy: 0.4994
+ - Validation Loss: 2.0790
+ - Validation End Logits Accuracy: 0.5204
+ - Validation Start Logits Accuracy: 0.4818
+ - Epoch: 0
 
  ## Model description
 
@@ -40,20 +40,19 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4524, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 45240, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  - training_precision: float32
 
  ### Training results
 
  | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
  |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
- | 2.3577 | 0.5055 | 0.4987 | 2.1050 | 0.5151 | 0.4843 | 0 |
- | 1.7583 | 0.5813 | 0.5573 | 2.0446 | 0.5277 | 0.4928 | 1 |
+ | 2.3715 | 0.5054 | 0.4994 | 2.0790 | 0.5204 | 0.4818 | 0 |
 
 
  ### Framework versions
 
- - Transformers 4.40.1
+ - Transformers 4.39.3
  - TensorFlow 2.15.0
- - Datasets 2.19.0
- - Tokenizers 0.19.1
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
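The `optimizer` entry above describes a Keras Adam optimizer whose learning rate follows a `PolynomialDecay` schedule (power 1.0, i.e. linear decay from 2e-05 to 0 over 45240 steps in the new config). As a rough sketch, what that schedule computes can be reproduced in pure Python (this mirrors the Keras formula but is not Keras itself):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=45240,
                     end_lr=0.0, power=1.0):
    """Learning rate at `step`, mirroring Keras PolynomialDecay
    with cycle=False (values taken from the config dict above)."""
    step = min(step, decay_steps)  # clamp: rate stays at end_lr afterwards
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 2e-05 at the first step
print(polynomial_decay(45240))  # 0.0 by the final step
```

With power 1.0 this is a plain linear warmdown, the schedule `transformers`' TF examples typically build via `create_optimizer`.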
config.json CHANGED
@@ -19,6 +19,6 @@
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
- "transformers_version": "4.40.1",
+ "transformers_version": "4.39.3",
  "vocab_size": 28996
  }
logs/train/events.out.tfevents.1714957848.f7da57357404.85.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3640efc651edb68691bf6e81958f820c41ded334e9d2afd2d88d97a05cde6cb3
+ size 1441432
logs/validation/events.out.tfevents.1714958638.f7da57357404.85.1.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e7fecf2d0a13e69cbc9eff529faa0e91a56780080b12a3800942ad9391d55cd
+ size 604
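The added log files are stored via Git LFS, so what the diff shows is the three-line LFS pointer (version, oid, size), not the TensorBoard event data itself. A small sketch of reading such a pointer, using a hypothetical `parse_lfs_pointer` helper:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines.
    Each line is '<key> <value>'; the value may itself contain no spaces."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3640efc651edb68691bf6e81958f820c41ded334e9d2afd2d88d97a05cde6cb3
size 1441432"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the real artifact held by the LFS server
```

The `oid` is the SHA-256 of the actual file content, which Git LFS uses to fetch the blob on checkout.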
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ed50b783f2104f5ed6df36aca4f5bfdf4e10160233e841b5ab405e938b6a1867
+ oid sha256:1c6dfd63deb1ef336849e61d8886a36975c4abf646ec62bd2ed97bcf33b93587
  size 260895720
tokenizer_config.json CHANGED
@@ -45,7 +45,7 @@
  "cls_token": "[CLS]",
  "do_lower_case": false,
  "mask_token": "[MASK]",
- "model_max_length": 1000000000000000019884624838656,
+ "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
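The old `model_max_length` is not arbitrary: it is the `transformers` "effectively unlimited" sentinel, `int(1e30)`, whose odd trailing digits come from floating-point rounding. The new value of 512 is presumably chosen to match DistilBERT's maximum sequence length (an assumption; the commit does not state the reason). A quick check of the sentinel value:

```python
# The Transformers library uses int(1e30) as a "no limit" default for
# model_max_length; the float 1e30 is not exactly 10**30, hence the
# strange-looking integer seen in the old tokenizer_config.json.
VERY_LARGE_INTEGER = int(1e30)
print(VERY_LARGE_INTEGER)  # 1000000000000000019884624838656

# New, explicit limit written by this commit:
MAX_LEN = 512
```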