happii committed
Commit
340a6f6
1 Parent(s): 5f9805d

Model save
README.md CHANGED
@@ -2,15 +2,11 @@
  license: apache-2.0
  base_model: mistralai/Mistral-7B-v0.1
  tags:
- - alignment-handbook
- - trl
- - sft
- - generated_from_trainer
  - trl
  - sft
  - generated_from_trainer
  datasets:
- - HuggingFaceH4/ultrachat_200k
+ - generator
  model-index:
  - name: zephyr-7b-sft-full
    results: []
@@ -21,9 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  # zephyr-7b-sft-full
 
- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9491
+ - Loss: 1.1126
 
  ## Model description
 
@@ -43,13 +39,13 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 8
+ - train_batch_size: 32
+ - eval_batch_size: 16
  - seed: 42
  - distributed_type: multi-GPU
  - num_devices: 4
- - total_train_batch_size: 64
- - total_eval_batch_size: 32
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
@@ -59,7 +55,7 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 0.9409 | 1.0 | 2179 | 0.9491 |
+ | 1.0697 | 1.0 | 4358 | 1.1126 |
 
 
  ### Framework versions
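
The updated hyperparameters in the README hunk above are internally consistent: with no gradient accumulation listed, the total batch sizes should be the per-device values multiplied by the four devices. A quick sanity check of that assumed relationship:

```python
# Sanity-check the effective batch sizes reported in the README diff.
# Assumes total = per_device * num_devices (no gradient accumulation is listed).
num_devices = 4

old = {"train": 16, "eval": 8}    # per-device sizes before this commit
new = {"train": 32, "eval": 16}   # per-device sizes after this commit

def totals(per_device, devices=num_devices):
    """Effective batch size across all devices."""
    return {k: v * devices for k, v in per_device.items()}

print(totals(old))  # {'train': 64, 'eval': 32}  -- matches the old README
print(totals(new))  # {'train': 128, 'eval': 64} -- matches the new README
```

Both the pre- and post-commit totals match what the diff records, so the changed totals are explained entirely by the doubled per-device batch sizes.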
all_results.json CHANGED
@@ -1,14 +1,9 @@
  {
  "epoch": 1.0,
- "eval_loss": 0.9490520358085632,
- "eval_runtime": 757.9326,
- "eval_samples": 23109,
- "eval_samples_per_second": 20.359,
- "eval_steps_per_second": 0.637,
  "total_flos": 456238269726720.0,
- "train_loss": 1.0045825210718344,
- "train_runtime": 27286.1988,
+ "train_loss": 1.1632422284009862,
+ "train_runtime": 29234.3114,
  "train_samples": 207864,
- "train_samples_per_second": 5.11,
- "train_steps_per_second": 0.08
+ "train_samples_per_second": 19.077,
+ "train_steps_per_second": 0.149
  }
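
The new throughput figures above are mutually consistent: samples per second should be roughly steps per second times the total train batch size of 128 recorded in the README from the same commit. A quick check of that assumed relationship:

```python
# Cross-check the updated all_results.json throughput figures.
# total_train_batch_size = 128 comes from the README changed in the same commit.
train_steps_per_second = 0.149
total_train_batch_size = 128

derived_samples_per_second = train_steps_per_second * total_train_batch_size
print(derived_samples_per_second)  # 19.072, close to the reported 19.077
```

The small residual comes from rounding in the reported steps-per-second value.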
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:52142cc9d392c1aec9d10221e84c064bda396e12e4093aeeb76ef999d2702285
+ oid sha256:abbc6a86071a19b58a3f3f4d371aa81787f020a24b0c81431104013eea616520
  size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29336a781a3cc0ec66f6c43b91382418b9210b100a11aa2687674fc1571a5278
+ oid sha256:a2cc2dc988a31beb0a24c7b4b93ddaa529af3355c75533fae481ddfa96674a01
  size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7c7e1a8f431fa71638a7a3e4c5bf239419d3a0224c01682c15b326b1f78caf76
+ oid sha256:083d153dc313d61ae479d3b1862a3477f5d05e9e9affb6d1d9d2e25573215594
  size 4540516344
runs/May24_19-38-16_ubuntu/events.out.tfevents.1716580221.ubuntu.2195002.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f4ffecdd676e69047a78e332d778612c7da27eaa8037c97783a9ce4649835ba
- size 186394
+ oid sha256:e8ab2d791a94c8a3285b2cc7b0e748920cbaedfee83e4b00c23da89f1996a337
+ size 189340
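
Each weight and event file in this commit is stored as a Git LFS pointer in the three-line version/oid/size layout shown above. As an illustration only, a minimal parser for that layout:

```python
# Parse a Git LFS pointer file (the version/oid/size layout shown above).
# parse_lfs_pointer is a hypothetical helper written for illustration.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e8ab2d791a94c8a3285b2cc7b0e748920cbaedfee83e4b00c23da89f1996a337
size 189340
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 189340
```

Only the pointer changes in the repository history; the actual tensor data lives in LFS storage addressed by the sha256 oid.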
train_results.json CHANGED
@@ -1,9 +1,9 @@
  {
  "epoch": 1.0,
  "total_flos": 456238269726720.0,
- "train_loss": 1.0045825210718344,
- "train_runtime": 27286.1988,
+ "train_loss": 1.1632422284009862,
+ "train_runtime": 29234.3114,
  "train_samples": 207864,
- "train_samples_per_second": 5.11,
- "train_steps_per_second": 0.08
+ "train_samples_per_second": 19.077,
+ "train_steps_per_second": 0.149
  }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff