Rocketknight1 (HF staff) committed
Commit 1e60340
1 Parent(s): 979e288

Training in progress epoch 0

README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: Rocketknight1/bert-base-uncased-finetuned-swag
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # Rocketknight1/bert-base-uncased-finetuned-swag
+
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 0.8360
+ - Train Accuracy: 0.6631
+ - Validation Loss: 0.5885
+ - Validation Accuracy: 0.7706
+ - Epoch: 0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
+ |:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
+ | 0.8360 | 0.6631 | 0.5885 | 0.7706 | 0 |
+
+
+ ### Framework versions
+
+ - Transformers 4.18.0.dev0
+ - TensorFlow 2.8.0-rc0
+ - Datasets 2.0.1.dev0
+ - Tokenizers 0.11.0
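The optimizer entry in the hyperparameters above encodes a Keras `PolynomialDecay` learning-rate schedule; with `power=1.0` it is simply a linear ramp from 5e-05 down to 0.0 over 9192 steps. A minimal pure-Python sketch of that schedule (the `polynomial_decay` helper is illustrative, not part of this repository):

```python
def polynomial_decay(step, initial_lr=5e-05, decay_steps=9192,
                     end_lr=0.0, power=1.0):
    """Learning rate at `step` under a Keras-style polynomial decay."""
    step = min(step, decay_steps)           # hold at end_lr once fully decayed
    remaining = 1.0 - step / decay_steps    # fraction of the decay still left
    return (initial_lr - end_lr) * remaining ** power + end_lr

# power=1.0 makes the decay linear: halfway through, the rate is halved.
print(polynomial_decay(0))      # 5e-05
print(polynomial_decay(4596))   # 2.5e-05
print(polynomial_decay(9192))   # 0.0
```

The same schedule can be built directly with `tf.keras.optimizers.schedules.PolynomialDecay` and passed to `tf.keras.optimizers.Adam` as its `learning_rate`.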
config.json CHANGED
@@ -18,7 +18,7 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
- "transformers_version": "4.12.0.dev0",
+ "transformers_version": "4.18.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
logs/train/events.out.tfevents.1650908224.matt-TRX40-AORUS-PRO-WIFI.66086.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:679a6680e929144b923c477c4d5f3ca7f6a135dca4aed668873f19d22ba3592f
+ size 2752534
logs/train/events.out.tfevents.1650908333.matt-TRX40-AORUS-PRO-WIFI.66086.1.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:018243c563572a6d0a6301e5f06502651f1275d8062412bfe8ad88bc47964df3
+ size 2755945
logs/train/events.out.tfevents.1650908537.matt-TRX40-AORUS-PRO-WIFI.66086.2.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c52f64e12ee21cd2487b082c2a74e2ff8ade61977987e434679acf3202ebd5ba
+ size 2793527
logs/train/events.out.tfevents.1650908582.matt-TRX40-AORUS-PRO-WIFI.66086.3.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa71c71936d187ac254cfc9620979a2418e06f4623f4651e222dee7288292930
+ size 2795277
logs/train/events.out.tfevents.1650908775.matt-TRX40-AORUS-PRO-WIFI.66086.4.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bffacc58800e94aba8471c1b9be97f75c24573b8098b9951568a3396cecf4a9
+ size 2798541
logs/validation/events.out.tfevents.1650909522.matt-TRX40-AORUS-PRO-WIFI.66086.5.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c6e84569caad0feac9e46c1b15b10ffb945f5246f3b8b7726684ea9626939a0
+ size 356
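The event-log files above are stored as Git LFS pointer files rather than as the binary data itself: each pointer records the spec version, a SHA-256 object ID, and the size in bytes of the real file. A small sketch of parsing one of these pointers (the `parse_lfs_pointer` helper is illustrative, not a Git LFS API):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its 'key value' fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the first event-log file added above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:679a6680e929144b923c477c4d5f3ca7f6a135dca4aed668873f19d22ba3592f\n"
    "size 2752534\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])                   # 2752534  (bytes of the real file)
print(info["oid"].split(":", 1)[0])   # sha256   (hash algorithm)
```

When the repository is cloned with `git lfs` installed, these pointers are transparently replaced by the full binary files they reference.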
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:90f01c57fa72e66e7e70400354e8380decfd173752f23e9ec371e324f9f94819
+ oid sha256:4a32986871ca9158bffcb4e3344cf9539d0a373301a00ef720cd9333567084fa
  size 438203732
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff