anhnv125 committed on
Commit
cf0f85c
1 Parent(s): 2c58f32

anhnv125/reward-model

Files changed (4)
  1. README.md +74 -0
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +2 -2
README.md CHANGED
@@ -0,0 +1,74 @@
+ ---
+ base_model: ChaiML/reward_models_100_170000000_cp_498032
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: reward-model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # reward-model
+
+ This model is a fine-tuned version of [ChaiML/reward_models_100_170000000_cp_498032](https://huggingface.co/ChaiML/reward_models_100_170000000_cp_498032) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5733
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 3
+ - eval_batch_size: 3
+ - seed: 7
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 48
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 200
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.5918        | 0.12  | 200  | 0.6011          |
+ | 0.5998        | 0.24  | 400  | 0.5911          |
+ | 0.59          | 0.36  | 600  | 0.5863          |
+ | 0.585         | 0.49  | 800  | 0.5823          |
+ | 0.5646        | 0.61  | 1000 | 0.5803          |
+ | 0.6048        | 0.73  | 1200 | 0.5773          |
+ | 0.6041        | 0.85  | 1400 | 0.5784          |
+ | 0.5694        | 0.97  | 1600 | 0.5747          |
+ | 0.5855        | 1.09  | 1800 | 0.5767          |
+ | 0.5764        | 1.22  | 2000 | 0.5764          |
+ | 0.5606        | 1.34  | 2200 | 0.5799          |
+ | 0.5583        | 1.46  | 2400 | 0.5778          |
+ | 0.5656        | 1.58  | 2600 | 0.5731          |
+ | 0.5812        | 1.7   | 2800 | 0.5747          |
+ | 0.5911        | 1.82  | 3000 | 0.5731          |
+ | 0.5771        | 1.94  | 3200 | 0.5733          |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0.dev0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
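The hyperparameters above imply an effective batch size of 3 × 16 = 48 and a linear-warmup-then-cosine learning-rate schedule. A minimal sketch of that schedule, mirroring the shape of transformers' `get_cosine_schedule_with_warmup` (the total-step count of 3300 is our estimate from the results table, ~1650 steps/epoch × 2 epochs, not a value stated in the card):

```python
import math

def lr_at_step(step, base_lr=1e-5, warmup_steps=200, total_steps=3300):
    """Linear warmup to base_lr, then cosine decay to zero.

    total_steps=3300 is an estimate inferred from the results table
    (step 3200 at epoch 1.94 -> ~1650 steps/epoch over 2 epochs).
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Effective batch size: per-device batch x gradient accumulation steps
effective_batch = 3 * 16  # matches total_train_batch_size: 48

print(effective_batch)   # 48
print(lr_at_step(100))   # mid-warmup: 5e-06
print(lr_at_step(200))   # warmup done: 1e-05
```

This explains why the validation loss flattens late in training: by step 3000 the learning rate has decayed to a small fraction of 1e-05.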
config.json CHANGED
@@ -35,7 +35,7 @@
   }
  },
  "torch_dtype": "float32",
- "transformers_version": "4.34.1",
+ "transformers_version": "4.36.0.dev0",
  "use_cache": true,
  "vocab_size": 50257
 }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0ce85abb6b8d303da65c45a05e5eec017c2995136a0095956442d1d3f2dec0e3
+ oid sha256:c38e9e8392f9e96c91dfc57be141e1b7847a6087884cbf222a35e37c52d5fe02
  size 497780432
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1318d8ccd2e1aedee13b0a06a6be6c81e6da263c655cde47ca1117b3a444f3ab
- size 4027
+ oid sha256:543c9960a4d018bf3a5b02d4ff50e5222f6617ba527ca1280ed6acfaa3861863
+ size 4219
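The `model.safetensors` and `training_args.bin` entries in this commit are Git LFS pointer files, not the binaries themselves: each records the pointer spec version, a `sha256` object ID, and the object size in bytes. A minimal sketch of parsing such a pointer (the field layout follows the git-lfs pointer format; the helper name is ours):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields.

    A pointer is a small text file with lines of the form
    'version <url>', 'oid sha256:<hex digest>', and 'size <bytes>'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new model.safetensors pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c38e9e8392f9e96c91dfc57be141e1b7847a6087884cbf222a35e37c52d5fe02
size 497780432"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo)               # sha256
print(int(info["size"]))  # 497780432
```

Note that `model.safetensors` changed OID but not size (same architecture, new weights), while `training_args.bin` changed both, consistent with the updated hyperparameters.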