RikkiXu committed on
Commit d9c464c
1 Parent(s): 20f1d01

Model save

README.md CHANGED
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [imone/Mistral_7B_with_EOT_token](https://huggingface.co/imone/Mistral_7B_with_EOT_token) on the generator dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3284
+ - Loss: 0.9067
 
  ## Model description
 
@@ -38,13 +38,13 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 16
+ - learning_rate: 2e-05
+ - train_batch_size: 4
  - eval_batch_size: 8
  - seed: 42
  - distributed_type: multi-GPU
  - num_devices: 8
- - total_train_batch_size: 128
+ - total_train_batch_size: 32
  - total_eval_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
@@ -53,18 +53,18 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.6695        | 1.0   | 572  | 0.6610          |
- | 0.523         | 2.0   | 1144 | 0.4895          |
- | 0.3858        | 3.0   | 1716 | 0.3774          |
- | 0.281         | 4.0   | 2288 | 0.3296          |
- | 0.229         | 5.0   | 2860 | 0.3284          |
+ | Training Loss | Epoch | Step  | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 0.0936        | 1.0   | 2315  | 0.9145          |
+ | 0.7793        | 2.0   | 4630  | 5.4135          |
+ | 0.3835        | 3.0   | 6945  | 3.1220          |
+ | 0.0959        | 4.0   | 9260  | 1.1934          |
+ | 0.0725        | 5.0   | 11575 | 0.9067          |
 
 
  ### Framework versions
 
- - Transformers 4.40.0
+ - Transformers 4.38.2
  - Pytorch 2.1.2+cu118
- - Datasets 2.19.0
- - Tokenizers 0.19.1
+ - Datasets 2.16.1
+ - Tokenizers 0.15.2
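The updated hyperparameters above (learning rate 2e-05, per-device train batch size 4 across 8 GPUs for a total of 32, cosine schedule, five epochs per the results table) map roughly onto the `TrainingArguments` below. This is a minimal sketch under those assumptions, not the training script behind this commit; the `output_dir` is a hypothetical placeholder, and settings not visible in the diff (e.g. warmup) are omitted.

```python
# Minimal sketch of TrainingArguments matching the updated model card values.
# Not the author's script; output_dir is a placeholder, and warmup settings
# (not visible in this diff) are left at their defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-eot-sft",   # hypothetical
    learning_rate=2e-05,
    per_device_train_batch_size=4,     # 4 per device x 8 GPUs = total 32
    per_device_eval_batch_size=8,      # 8 per device x 8 GPUs = total 64
    num_train_epochs=5,                # five epochs, per the results table
    lr_scheduler_type="cosine",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    seed=42,
)
```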
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 32000,
- "transformers_version": "4.40.0"
+ "transformers_version": "4.38.2"
  }
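The only change to `generation_config.json` is the recorded Transformers version; the non-default `eos_token_id` of 32000 (the added EOT token) is unchanged and is what terminates generation at inference time. A hedged loading sketch, with a placeholder path since this commit does not name the published repo id:

```python
# Sketch: load this checkpoint and generate using its saved generation config.
# "path/to/this-checkpoint" is a placeholder for the actual repo id or local dir.
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

path = "path/to/this-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype="auto", device_map="auto")
gen_config = GenerationConfig.from_pretrained(path)  # bos_token_id=1, eos_token_id=32000

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, generation_config=gen_config, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```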
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:898125394c1d8957af8e8197984041948f092403b9c85fe87aec527b3afdf81f
+ oid sha256:40a7faad9a9857f9eaf97bb695b2ce6a6795902aee573bf142e57c3c236a4f6a
  size 4943178720
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a08db4ad63d0d59fb7c4c8aae22ebf56b86d423b0f8edb10767234dd10bcc2f8
+ oid sha256:8b7aa038fc1b07012b73042a53ca58f1c1109a00509a0e7534f51a63047d481d
  size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:db240af94994de350ee07c7b12e012e1177fb0025d5f618ae4f7bf023418ecc9
+ oid sha256:c9bb26ed2992db60cf261faacd1530a1cb1c0d33e6a60764cf6372ec0fe57ec6
  size 4540532728
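The three entries above are Git LFS pointer files: each records only the SHA-256 (`oid`) and byte size of a model shard, not the weights themselves. A small sketch for checking a downloaded shard against its pointer; both file paths are placeholders.

```python
# Sketch: verify a downloaded shard against its Git LFS pointer (oid + size).
# Both paths are placeholders for wherever the pointer and the real file live.
import hashlib
from pathlib import Path

def verify_lfs_pointer(pointer_path: str, blob_path: str) -> bool:
    fields = dict(line.split(" ", 1)
                  for line in Path(pointer_path).read_text().splitlines()
                  if " " in line)
    expected_oid = fields["oid"].strip().removeprefix("sha256:")
    expected_size = int(fields["size"])

    digest = hashlib.sha256()
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return (digest.hexdigest() == expected_oid
            and Path(blob_path).stat().st_size == expected_size)

print(verify_lfs_pointer("model-00001-of-00003.safetensors.pointer",
                         "model-00001-of-00003.safetensors"))
```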
runs/Apr23_17-57-53_n136-085-012/events.out.tfevents.1713866692.n136-085-012.1067198.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8742be978104585561281e68bf12fbc1943722a69e20c8b7d2dcb9f54527dae9
- size 470173
+ oid sha256:310e8930ece76c8f527b5a51eabbee4fcd2f48501a7df259310937a132c50785
+ size 495063