Rocketknight1 (HF staff) committed
Commit 61e4b04
1 Parent(s): 4717924

Training in progress epoch 0
README.md CHANGED
@@ -14,13 +14,13 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.0338
- - Validation Loss: 0.0617
- - Train Precision: 0.9234
- - Train Recall: 0.9344
- - Train F1: 0.9289
- - Train Accuracy: 0.9833
- - Epoch: 2
+ - Train Loss: 0.1966
+ - Validation Loss: 0.0701
+ - Train Precision: 0.9031
+ - Train Recall: 0.9223
+ - Train F1: 0.9126
+ - Train Accuracy: 0.9802
+ - Epoch: 0
 
  ## Model description
 
@@ -46,14 +46,12 @@ The following hyperparameters were used during training:
 
  | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
  |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
- | 0.1901     | 0.0718          | 0.9022          | 0.9228       | 0.9124   | 0.9801         | 0     |
- | 0.0539     | 0.0607          | 0.9202          | 0.9341       | 0.9271   | 0.9830         | 1     |
- | 0.0338     | 0.0617          | 0.9234          | 0.9344       | 0.9289   | 0.9833         | 2     |
+ | 0.1966     | 0.0701          | 0.9031          | 0.9223       | 0.9126   | 0.9802         | 0     |
 
 
  ### Framework versions
 
- - Transformers 4.19.0.dev0
- - TensorFlow 2.8.0-rc0
- - Datasets 2.1.1.dev0
+ - Transformers 4.21.0.dev0
+ - TensorFlow 2.9.1
+ - Datasets 2.3.3.dev0
  - Tokenizers 0.11.0
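The F1 values in the table above are consistent with their precision/recall pairs, since F1 is the harmonic mean of the two. A quick check on the epoch-0 row (a minimal sketch; the numbers are taken from the table, and the helper name is ours):

```python
# Verify that the reported Train F1 is the harmonic mean of
# Train Precision and Train Recall (epoch-0 row of the table above).
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision, recall = 0.9031, 0.9223
f1 = f1_score(precision, recall)
print(round(f1, 4))  # 0.9126, matching the reported Train F1
```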
config.json CHANGED
@@ -40,6 +40,6 @@
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
- "transformers_version": "4.19.0.dev0",
+ "transformers_version": "4.21.0.dev0",
  "vocab_size": 30522
  }
logs/train/events.out.tfevents.1651825557.matt-TRX40-AORUS-PRO-WIFI.13322.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e04ff08078f25d9f5a73661d0740c1719c877bc61981544f0fea882e4ac3054f
+ size 1606428
logs/train/events.out.tfevents.1658418926.matt-TRX40-AORUS-PRO-WIFI.36922.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:282d6704d6a86358fc41e81f2ff7a34cb62226360b83579569b97f0f80453217
+ size 1604585
logs/validation/events.out.tfevents.1658418962.matt-TRX40-AORUS-PRO-WIFI.36922.1.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:182be15e1b8dd3c6d755d3e9e699c00a1b3142584d3aad9e5b2d8ea84159305b
+ size 194
special_tokens_map.json CHANGED
@@ -1 +1,7 @@
- {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
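The special_tokens_map.json change is purely cosmetic: the same mapping is rewritten with sorted keys and one entry per line. A sketch showing that a standard pretty-print with sorted keys produces the new layout (assuming that is how the newer serializer writes the file):

```python
import json

# The one-line and multi-line forms of special_tokens_map.json carry
# identical data; only the serialization differs.
old = ('{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", '
      '"cls_token": "[CLS]", "mask_token": "[MASK]"}')
tokens = json.loads(old)
new = json.dumps(tokens, indent=2, sort_keys=True)
print(new)  # keys appear alphabetically, one per line, as in the diff
assert json.loads(new) == tokens  # round-trips to the same mapping
```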
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:34de92ee6d3616d17c11ab81e41fd587ec3277be01bbc2454811fbed23812b4e
+ oid sha256:b9edfa2c9d2453a2fa712f282d3dc5a4167573913017650c42a42b05e8565ce8
  size 265606504
tokenizer_config.json CHANGED
@@ -1 +1,14 @@
- {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "distilbert-base-uncased", "tokenizer_class": "DistilBertTokenizer"}
+ {
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "name_or_path": "distilbert-base-uncased",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "special_tokens_map_file": null,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }