IrwinD committed on
Commit 6e34384
1 parent: bc8c057

End of training

Files changed (5):
  1. README.md +66 -46
  2. config.json +0 -1
  3. model.safetensors +1 -1
  4. tokenizer_config.json +3 -1
  5. training_args.bin +2 -2
README.md CHANGED
@@ -2,12 +2,29 @@
 license: apache-2.0
 base_model: distilbert/distilbert-base-uncased
 tags:
+- trl
+- reward-trainer
 - generated_from_trainer
 datasets:
 - hdfs_rlhf_log_summary_dataset
+metrics:
+- accuracy
 model-index:
 - name: log_sage_reward_model
-  results: []
+  results:
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: hdfs_rlhf_log_summary_dataset
+      type: hdfs_rlhf_log_summary_dataset
+      config: default
+      split: None
+      args: default
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 1.0
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,7 +34,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the hdfs_rlhf_log_summary_dataset dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0005
+- Loss: 0.1669
+- Accuracy: 1.0
 
 ## Model description
 
@@ -37,57 +55,59 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1.41e-05
-- train_batch_size: 4
-- eval_batch_size: 4
+- train_batch_size: 6
+- eval_batch_size: 24
 - seed: 42
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 96
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 40
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 11   | 0.0022          |
-| No log        | 2.0   | 22   | 0.0049          |
-| No log        | 3.0   | 33   | 0.0006          |
-| No log        | 4.0   | 44   | 0.0006          |
-| No log        | 5.0   | 55   | 0.0008          |
-| No log        | 6.0   | 66   | 0.0003          |
-| No log        | 7.0   | 77   | 0.0005          |
-| No log        | 8.0   | 88   | 0.0010          |
-| No log        | 9.0   | 99   | 0.0008          |
-| No log        | 10.0  | 110  | 0.0007          |
-| No log        | 11.0  | 121  | 0.0007          |
-| No log        | 12.0  | 132  | 0.0006          |
-| No log        | 13.0  | 143  | 0.0006          |
-| No log        | 14.0  | 154  | 0.0004          |
-| No log        | 15.0  | 165  | 0.0007          |
-| No log        | 16.0  | 176  | 0.0007          |
-| No log        | 17.0  | 187  | 0.0006          |
-| No log        | 18.0  | 198  | 0.0004          |
-| No log        | 19.0  | 209  | 0.0005          |
-| No log        | 20.0  | 220  | 0.0006          |
-| No log        | 21.0  | 231  | 0.0006          |
-| No log        | 22.0  | 242  | 0.0006          |
-| No log        | 23.0  | 253  | 0.0009          |
-| No log        | 24.0  | 264  | 0.0006          |
-| No log        | 25.0  | 275  | 0.0007          |
-| No log        | 26.0  | 286  | 0.0005          |
-| No log        | 27.0  | 297  | 0.0005          |
-| No log        | 28.0  | 308  | 0.0004          |
-| No log        | 29.0  | 319  | 0.0004          |
-| No log        | 30.0  | 330  | 0.0005          |
-| No log        | 31.0  | 341  | 0.0005          |
-| No log        | 32.0  | 352  | 0.0005          |
-| No log        | 33.0  | 363  | 0.0005          |
-| No log        | 34.0  | 374  | 0.0004          |
-| No log        | 35.0  | 385  | 0.0004          |
-| No log        | 36.0  | 396  | 0.0005          |
-| No log        | 37.0  | 407  | 0.0005          |
-| No log        | 38.0  | 418  | 0.0005          |
-| No log        | 39.0  | 429  | 0.0005          |
-| No log        | 40.0  | 440  | 0.0005          |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| No log        | 1.0   | 1    | 0.6950          | 0.5      |
+| No log        | 2.0   | 2    | 0.6896          | 1.0      |
+| No log        | 3.0   | 3    | 0.6843          | 1.0      |
+| No log        | 4.0   | 4    | 0.6789          | 1.0      |
+| No log        | 5.0   | 5    | 0.6735          | 1.0      |
+| No log        | 6.0   | 6    | 0.6671          | 1.0      |
+| No log        | 7.0   | 7    | 0.6597          | 1.0      |
+| No log        | 8.0   | 8    | 0.6510          | 1.0      |
+| No log        | 9.0   | 9    | 0.6403          | 1.0      |
+| 0.0839        | 10.0  | 10   | 0.6275          | 1.0      |
+| 0.0839        | 11.0  | 11   | 0.6130          | 1.0      |
+| 0.0839        | 12.0  | 12   | 0.5955          | 1.0      |
+| 0.0839        | 13.0  | 13   | 0.5747          | 1.0      |
+| 0.0839        | 14.0  | 14   | 0.5508          | 1.0      |
+| 0.0839        | 15.0  | 15   | 0.5250          | 1.0      |
+| 0.0839        | 16.0  | 16   | 0.4984          | 1.0      |
+| 0.0839        | 17.0  | 17   | 0.4698          | 1.0      |
+| 0.0839        | 18.0  | 18   | 0.4413          | 1.0      |
+| 0.0839        | 19.0  | 19   | 0.4121          | 1.0      |
+| 0.0658        | 20.0  | 20   | 0.3850          | 1.0      |
+| 0.0658        | 21.0  | 21   | 0.3604          | 1.0      |
+| 0.0658        | 22.0  | 22   | 0.3384          | 1.0      |
+| 0.0658        | 23.0  | 23   | 0.3186          | 1.0      |
+| 0.0658        | 24.0  | 24   | 0.2995          | 1.0      |
+| 0.0658        | 25.0  | 25   | 0.2823          | 1.0      |
+| 0.0658        | 26.0  | 26   | 0.2664          | 1.0      |
+| 0.0658        | 27.0  | 27   | 0.2516          | 1.0      |
+| 0.0658        | 28.0  | 28   | 0.2384          | 1.0      |
+| 0.0658        | 29.0  | 29   | 0.2260          | 1.0      |
+| 0.0346        | 30.0  | 30   | 0.2149          | 1.0      |
+| 0.0346        | 31.0  | 31   | 0.2054          | 1.0      |
+| 0.0346        | 32.0  | 32   | 0.1971          | 1.0      |
+| 0.0346        | 33.0  | 33   | 0.1898          | 1.0      |
+| 0.0346        | 34.0  | 34   | 0.1838          | 1.0      |
+| 0.0346        | 35.0  | 35   | 0.1787          | 1.0      |
+| 0.0346        | 36.0  | 36   | 0.1746          | 1.0      |
+| 0.0346        | 37.0  | 37   | 0.1714          | 1.0      |
+| 0.0346        | 38.0  | 38   | 0.1691          | 1.0      |
+| 0.0346        | 39.0  | 39   | 0.1676          | 1.0      |
+| 0.021         | 40.0  | 40   | 0.1669          | 1.0      |
 
 
 ### Framework versions
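The new `trl` and `reward-trainer` tags, the pairwise accuracy metric, and the `total_train_batch_size: 96` line (train_batch_size 6 × gradient_accumulation_steps 16 on a single device) all point to TRL's `RewardTrainer`. Below is a minimal sketch of a setup matching the hyperparameters above, assuming that trainer; the dataset hub path and split handling are illustrative placeholders, not confirmed by the repo:

```python
# Sketch only: assumes TRL's RewardTrainer (suggested by the new tags);
# the dataset path and preprocessing are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=1  # single scalar logit used as the reward
)

dataset = load_dataset("hdfs_rlhf_log_summary_dataset")  # placeholder hub id

args = RewardConfig(
    output_dir="log_sage_reward_model",
    learning_rate=1.41e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=24,
    gradient_accumulation_steps=16,  # 6 * 16 = 96 effective batch per step
    num_train_epochs=40,
    seed=42,
)

trainer = RewardTrainer(
    model=model,
    args=args,
    processing_class=tokenizer,      # `tokenizer=` in older TRL releases
    train_dataset=dataset["train"],  # expects chosen/rejected pairs
    eval_dataset=dataset.get("validation"),
)
trainer.train()
```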
config.json CHANGED
@@ -20,7 +20,6 @@
   "n_heads": 12,
   "n_layers": 6,
   "pad_token_id": 0,
-  "problem_type": "regression",
   "qa_dropout": 0.1,
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:16869d0953d5b87a61040dee2caef1db68a03ed0adf7482b276d86381884e93c
+oid sha256:ed35b02ba7e11272f3ddca09d5f2c2ffae2b557e8ba3a98fbd69320f3a4c23bd
 size 267829484
tokenizer_config.json CHANGED
@@ -43,9 +43,11 @@
   },
   "clean_up_tokenization_spaces": true,
   "cls_token": "[CLS]",
+  "do_basic_tokenize": true,
   "do_lower_case": true,
   "mask_token": "[MASK]",
-  "model_max_length": 1000000000000000019884624838656,
+  "model_max_length": 512,
+  "never_split": null,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
   "strip_accents": null,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8a519663b9a6387514f11ccce00d19ac348e481362fef0e7f53e66f3b08db7db
-size 4920
+oid sha256:f00328d8a44d896bbf900800303965952d869f74248f0d7dc15a100e5d582ea1
+size 4984
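Once downloaded, the checkpoint scores text with its classification head. A usage sketch, again assuming the `IrwinD/log_sage_reward_model` repo id and the one-logit reward head typical of TRL reward models:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "IrwinD/log_sage_reward_model"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo).eval()

def reward(text: str) -> float:
    # Higher scalar logit = summary judged closer to human preference.
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

# Rank two candidate summaries of the same log slice (illustrative strings).
candidates = [
    "Block replication finished on all three datanodes without errors.",
    "stuff happened in the cluster",
]
print(sorted(candidates, key=reward, reverse=True)[0])
```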