thainq107 committed on
Commit
e603940
1 Parent(s): 133713b

End of training

Files changed (5)
  1. README.md +77 -0
  2. special_tokens_map.json +7 -0
  3. tokenizer.json +0 -0
  4. tokenizer_config.json +13 -0
  5. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ license: apache-2.0
+ base_model: bert-base-uncased
+ tags:
+ - generated_from_trainer
+ datasets:
+ - banking77
+ metrics:
+ - f1
+ model-index:
+ - name: bert-base-banking77-pt2
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: banking77
+       type: banking77
+       config: default
+       split: test
+       args: default
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.9308363539280016
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-banking77-pt2
+
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2978
+ - F1: 0.9308
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 1.2051        | 1.0   | 626  | 0.8669          | 0.8173 |
+ | 0.3924        | 2.0   | 1252 | 0.3695          | 0.9156 |
+ | 0.187         | 3.0   | 1878 | 0.2978          | 0.9308 |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.1
+ - Pytorch 2.0.1
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3
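The hyperparameters and per-epoch results in the card map directly onto the `transformers` `Trainer` API. Below is a minimal sketch of how a run like this could be reproduced; it is an illustration inferred from the card, not the author's training script, and the dataset column names, `evaluation_strategy`, and the F1 averaging mode are assumptions.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# banking77: ~10k online-banking queries labeled with 77 intent classes
dataset = load_dataset("banking77")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=77
)

f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Averaging mode is an assumption; the card only reports "F1"
    return f1.compute(predictions=predictions, references=labels, average="weighted")

# Values below are taken verbatim from the "Training hyperparameters" section;
# Adam betas/epsilon and the linear schedule are the Trainer defaults
args = TrainingArguments(
    output_dir="bert-base-banking77-pt2",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

At batch size 16, banking77's roughly 10k training examples yield 626 optimizer steps per epoch, which matches the Step column in the results table.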
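For inference, the fine-tuned checkpoint can be loaded through a standard text-classification pipeline. A minimal sketch, assuming the model is published as `thainq107/bert-base-banking77-pt2` (the repo id is inferred from the commit author and model name, not stated in the card):

```python
from transformers import pipeline

# Repo id assumed from the commit author and model name
classifier = pipeline(
    "text-classification",
    model="thainq107/bert-base-banking77-pt2",
)

print(classifier("I am still waiting on my card?"))
# e.g. [{'label': 'card_arrival', 'score': ...}] -- illustrative output;
# the actual label strings depend on the id2label mapping in config.json
```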
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
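`special_tokens_map.json` and `tokenizer_config.json` above are the standard `BertTokenizer` serialization: the former names the five special tokens, the latter pins lowercasing, the 512-token `model_max_length`, and the tokenizer class. A short sketch of how they are consumed on reload (repo id assumed as above):

```python
from transformers import AutoTokenizer

# Loads tokenizer.json / vocab.txt plus the two config files shown above
tokenizer = AutoTokenizer.from_pretrained("thainq107/bert-base-banking77-pt2")

print(tokenizer.cls_token, tokenizer.sep_token)  # [CLS] [SEP]
print(tokenizer.model_max_length)                # 512
print(tokenizer.do_lower_case)                   # True

# Encoding wraps the text in [CLS] ... [SEP], per the special tokens map
enc = tokenizer("Card payment declined")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```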
vocab.txt ADDED
The diff for this file is too large to render. See raw diff