athrado committed on
Commit 0e12283
1 Parent(s): be2a943

Upload TFBertForSequenceClassification

Files changed (3)
  1. README.md +5 -4
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+base_model: bert-base-uncased
 tags:
 - generated_from_keras_callback
 model-index:
@@ -33,7 +34,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2775, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2775, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
 - training_precision: float32
 
 ### Training results
@@ -42,7 +43,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.30.2
-- TensorFlow 2.12.0
-- Datasets 2.13.1
+- Transformers 4.31.0
+- TensorFlow 2.13.0
+- Datasets 2.14.1
 - Tokenizers 0.13.3
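The optimizer entry above is the serialized form of Keras Adam with a `PolynomialDecay` learning-rate schedule: with `power: 1.0` it decays linearly from 5e-05 to 0.0 over 2775 steps. A minimal pure-Python sketch of that schedule (no TensorFlow dependency; the function name is ours, but the formula matches the serialized config):

```python
def polynomial_decay(step, initial_lr=5e-5, decay_steps=2775,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: the rate
    interpolates from initial_lr to end_lr over decay_steps,
    then stays clamped at end_lr."""
    step = min(step, decay_steps)  # clamp after decay_steps (cycle=False)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

# With power=1.0 the decay is linear: halfway through, the rate is halved.
print(polynomial_decay(0))      # 5e-05 at step 0
print(polynomial_decay(2775))   # 0.0 at the final step
```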
config.json CHANGED
@@ -28,7 +28,7 @@
   "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.30.2",
+  "transformers_version": "4.31.0",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
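The `transformers_version` field records the library version that wrote the checkpoint, which tools can compare against a required minimum. A hedged stdlib-only sketch (the helper is hypothetical, with naive numeric comparison and no pre-release handling):

```python
import json

def saved_with_at_least(config_text: str, minimum: str) -> bool:
    """Return True if the config's transformers_version is >= minimum,
    comparing dotted versions as integer tuples."""
    version = json.loads(config_text)["transformers_version"]
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# Fields mirror the config.json diff above.
config_text = '{"transformers_version": "4.31.0", "vocab_size": 30522}'
print(saved_with_at_least(config_text, "4.30.2"))  # True: 4.31.0 >= 4.30.2
```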
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9570b1bc29b01b3af2aade89d8cd5bc73e6b45afa3ea4d70b2df37ed8a5474ce
+oid sha256:22a18e06e6299638a5871e8dc747a64369456ccaa8c4cf81c3c26554c2c239a5
 size 438226204
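The `tf_model.h5` weights are stored via Git LFS, so the repository tracks only a small pointer file: its `oid` is the SHA-256 of the actual file content and `size` is its byte length, which is why retraining changes the oid but not the size here. A minimal sketch of building such a pointer for arbitrary bytes:

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS v1 pointer: oid is the SHA-256 of the content,
    size is its length in bytes."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

print(lfs_pointer(b"hello"))
```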