NotShrirang committed on
Commit a9cb355
1 Parent(s): 31f99bb

Upload TFAlbertForSequenceClassification

Files changed (2)
  1. README.md +25 -2
  2. tf_model.h5 +1 -1
README.md CHANGED
@@ -8,16 +8,39 @@ model-index:
 results: []
 ---
 
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
+
 # albert-spam-filter
 
-This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on [this](https://huggingface.co/datasets/NotShrirang/email-spam-filter/) dataset.
+This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
+It achieves the following results on the evaluation set:
+
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1290, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1130, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
 - training_precision: float32
 
+### Training results
+
+
+
 ### Framework versions
 
 - Transformers 4.34.0
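The optimizer entry in the updated card encodes a Keras `PolynomialDecay` learning-rate schedule (`initial_learning_rate=2e-05`, `decay_steps=1130`, `end_learning_rate=0.0`, `power=1.0`, `cycle=False`), i.e. a linear decay from 2e-05 to 0 over 1130 steps. A minimal sketch of what that schedule computes, in plain Python mirroring the Keras formula rather than calling the library:

```python
# Sketch of the PolynomialDecay schedule from the optimizer config above.
# With power=1.0 this is plain linear decay; cycle=False clamps the step
# at decay_steps. Values are taken from the updated card (decay_steps=1130).

def polynomial_decay(step, initial_lr=2e-05, decay_steps=1130,
                     end_lr=0.0, power=1.0):
    step = min(step, decay_steps)          # cycle=False: hold at end value past the end
    remaining = 1.0 - step / decay_steps   # fraction of the schedule left
    return (initial_lr - end_lr) * remaining ** power + end_lr

print(polynomial_decay(0))     # start of training: 2e-05
print(polynomial_decay(565))   # halfway: 1e-05
print(polynomial_decay(1130))  # fully decayed: 0.0
```

The previous revision of the card used the same schedule with `decay_steps=1290`; only the step count changed in this commit.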
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f623ecbe03d69817ea8b2d860c2b3441073370a90f82b43e2a4624245e054aa5
+oid sha256:a6842b32167e45f0d574816df3415fa7c71296b433eaaa1e916a3d66ee6fea81
 size 46781688