thainq107 committed
Commit 94cea01
1 Parent(s): d2dc522

End of training
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 license: apache-2.0
-base_model: bert-base-uncased
 tags:
 - generated_from_trainer
 datasets:
@@ -22,7 +21,7 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.8136639898903447
+      value: 0.9231109184950563
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4394
-- F1: 0.8137
+- Loss: 0.3604
+- F1: 0.9231
 
 ## Model description
 
@@ -53,23 +52,27 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 16
-- eval_batch_size: 8
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 1.6265        | 1.0   | 626  | 1.4394          | 0.8137 |
+| No log        | 1.0   | 157  | 1.8492          | 0.6818 |
+| 2.8429        | 2.0   | 314  | 0.7977          | 0.8704 |
+| 0.8817        | 3.0   | 471  | 0.4966          | 0.9071 |
+| 0.3842        | 4.0   | 628  | 0.3884          | 0.9196 |
+| 0.3842        | 5.0   | 785  | 0.3604          | 0.9231 |
 
 
 ### Framework versions
 
-- Transformers 4.33.1
-- Pytorch 2.0.1
-- Datasets 2.14.5
+- Transformers 4.27.1
+- Pytorch 2.0.1+cu118
+- Datasets 2.9.0
 - Tokenizers 0.13.3
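As a sanity check on the step counts in the training-results diff above, they are consistent with the changed batch size, assuming banking77's roughly 10,003 training examples (a figure taken from the dataset card, not from this commit):

```python
import math

# banking77 training-set size (assumption; see the dataset card).
train_examples = 10_003

# Old config: train_batch_size=16 -> 626 steps for the single epoch.
# New config: train_batch_size=64 -> 157 steps/epoch, 785 over 5 epochs.
old_steps = math.ceil(train_examples / 16)
new_steps = math.ceil(train_examples / 64)

print(old_steps)      # 626
print(new_steps)      # 157
print(new_steps * 5)  # 785
```

Both totals match the `Step` column on each side of the diff, so the tables and the new `train_batch_size`/`num_epochs` values agree with each other.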
logs/events.out.tfevents.1694140185.f3b3fb09da5a.714.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:39e0d36b2e76b9554775eafd343ea8007e2d785f0ad261d35222bd4a52ae196a
-size 11356
+oid sha256:e4494197324b391404b3c024e0d1332c0e5e29b42bfb5e6e7c5e49592013f751
+size 11710
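The file above is stored as a Git LFS pointer rather than the binary itself: three `key value` lines giving the spec version, the content hash, and the byte size. A minimal sketch of parsing the new pointer:

```python
# Parse a Git LFS pointer file (the post-commit one from the diff above).
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e4494197324b391404b3c024e0d1332c0e5e29b42bfb5e6e7c5e49592013f751
size 11710
"""

# Each line is "key value"; split only on the first space.
fields = dict(line.split(" ", 1) for line in pointer_text.splitlines())
algorithm, digest = fields["oid"].split(":", 1)

print(algorithm)            # sha256
print(int(fields["size"]))  # 11710
```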
tokenizer_config.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "clean_up_tokenization_spaces": true,
   "cls_token": "[CLS]",
   "do_lower_case": true,
   "mask_token": "[MASK]",
   "model_max_length": 512,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
+  "special_tokens_map_file": null,
   "strip_accents": null,
   "tokenize_chinese_chars": true,
   "tokenizer_class": "BertTokenizer",