dd123 committed
Commit 1493af1
1 parent: eabddb8

End of training

README.md CHANGED
@@ -2,26 +2,11 @@
 license: apache-2.0
 tags:
 - generated_from_trainer
-datasets:
-- test_data_huggingface
 metrics:
 - f1
 model-index:
 - name: test_model
-  results:
-  - task:
-      name: Text Classification
-      type: text-classification
-    dataset:
-      name: test_data_huggingface
-      type: test_data_huggingface
-      config: default
-      split: test
-      args: default
-    metrics:
-    - name: F1
-      type: f1
-      value: 0.8808660461929578
+  results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->

 # test_model

-This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the test_data_huggingface dataset.
+This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6842
-- F1: 0.8809
+- Loss: 1.4401
+- F1: 0.6002

 ## Model description

@@ -52,7 +37,7 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 32
+- train_batch_size: 64
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -63,14 +48,14 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| No log        | 1.0   | 47   | 1.0471          | 0.6157 |
-| No log        | 2.0   | 94   | 0.7371          | 0.6955 |
-| No log        | 3.0   | 141  | 0.6842          | 0.8809 |
+| No log        | 1.0   | 176  | 1.2690          | 0.5778 |
+| 1.5547        | 2.0   | 352  | 1.3403          | 0.5824 |
+| 0.9024        | 3.0   | 528  | 1.4401          | 0.6002 |


 ### Framework versions

-- Transformers 4.29.1
+- Transformers 4.27.1
 - Pytorch 2.0.1+cu117
-- Datasets 2.12.0
+- Datasets 2.6.1
 - Tokenizers 0.13.3
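For orientation, the hyperparameters in the updated card map onto `TrainingArguments` roughly as follows. This is a minimal sketch, not the repository's actual training script: `output_dir` and `num_train_epochs` are assumptions (the epoch count is read off the three-row results table, not stated in the hunks above).

```python
# Minimal sketch only: reconstructs TrainingArguments from the card's stated
# hyperparameters. output_dir and num_train_epochs are assumptions (the epoch
# count is inferred from the 3-epoch results table, not shown in the diff).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_model",           # assumed; not shown in the diff
    learning_rate=1e-4,                # learning_rate: 0.0001
    per_device_train_batch_size=64,    # train_batch_size after this commit
    per_device_eval_batch_size=16,     # eval_batch_size: 16
    seed=42,                           # seed: 42
    adam_beta1=0.9,                    # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # and epsilon=1e-08
    num_train_epochs=3,                # inferred from the results table
)
```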
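The model-index removed in this commit identified the task as text classification with F1 as the metric, so inference against the checkpoint would look roughly like the sketch below. The repo id `dd123/test_model` is an assumption pieced together from the committer and model name.

```python
# Hedged sketch: the repo id "dd123/test_model" is an assumption based on the
# committer and model name; the text-classification task comes from the
# model-index metadata removed in this commit.
from transformers import pipeline

classifier = pipeline("text-classification", model="dd123/test_model")
print(classifier("An example sentence to classify."))
```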
logs/events.out.tfevents.1686760955.ls.3626797.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a65db55a4efaf9b1788c3571f28451d4fcc76ce5f785efbf5d0eeb681240bec9
-size 5712
+oid sha256:85acc19c0c3fe4d47de65004e9cc20ae4192efb81d66cba37a27b3ebebaa9f54
+size 6066
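The TensorBoard event log is tracked with Git LFS, so the diff touches only the three-line pointer file (spec version, SHA-256 object id, byte size), not the binary itself. A small sketch of reading that pointer format; `parse_lfs_pointer` is a hypothetical helper, not part of git-lfs or huggingface_hub.

```python
# Hypothetical helper: splits the "key value" lines of a Git LFS pointer
# (version, oid, size) into a dict.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:85acc19c0c3fe4d47de65004e9cc20ae4192efb81d66cba37a27b3ebebaa9f54\n"
    "size 6066\n"
)
assert pointer["size"] == "6066"  # file size after this commit
```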
tokenizer_config.json CHANGED
@@ -1,11 +1,11 @@
1
  {
2
- "clean_up_tokenization_spaces": true,
3
  "cls_token": "[CLS]",
4
  "do_lower_case": true,
5
  "mask_token": "[MASK]",
6
  "model_max_length": 512,
7
  "pad_token": "[PAD]",
8
  "sep_token": "[SEP]",
 
9
  "strip_accents": null,
10
  "tokenize_chinese_chars": true,
11
  "tokenizer_class": "BertTokenizer",
 
1
  {
 
2
  "cls_token": "[CLS]",
3
  "do_lower_case": true,
4
  "mask_token": "[MASK]",
5
  "model_max_length": 512,
6
  "pad_token": "[PAD]",
7
  "sep_token": "[SEP]",
8
+ "special_tokens_map_file": null,
9
  "strip_accents": null,
10
  "tokenize_chinese_chars": true,
11
  "tokenizer_class": "BertTokenizer",