Holmeister committed
Commit 7ef47c0
Parent: aa59006

End of training

README.md CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4341
+ - Loss: 0.4400
 
  ## Model description
 
@@ -45,17 +45,19 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant
  - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 4
+ - num_epochs: 6
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 1.2154        | 0.95  | 15   | 0.6392          |
- | 0.5041        | 1.97  | 31   | 0.4795          |
- | 0.4393        | 2.98  | 47   | 0.4476          |
- | 0.394         | 3.81  | 60   | 0.4341          |
+ | 1.2165        | 0.95  | 15   | 0.6419          |
+ | 0.505         | 1.97  | 31   | 0.4841          |
+ | 0.4416        | 2.98  | 47   | 0.4493          |
+ | 0.3976        | 4.0   | 63   | 0.4346          |
+ | 0.375         | 4.95  | 78   | 0.4301          |
+ | 0.2842        | 5.71  | 90   | 0.4400          |
 
 
  ### Framework versions
@@ -63,5 +65,5 @@ The following hyperparameters were used during training:
  - PEFT 0.7.2.dev0
  - Transformers 4.36.2
  - Pytorch 2.1.0+cu121
- - Datasets 2.16.0
+ - Datasets 2.16.1
  - Tokenizers 0.15.0
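Note: the hyperparameters listed in the updated README map roughly onto `TrainingArguments` from transformers. The sketch below is illustrative only, not the author's training script: the `output_dir` and the `adamw_torch` optimizer name are assumptions (the README only says "Adam"), `fp16=True` stands in for "Native AMP", and the learning rate and batch sizes sit in unchanged README lines that this diff does not show.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters shown in the updated README.
# learning_rate and batch sizes are in unchanged README lines
# not visible in this diff, so they are omitted here.
training_args = TrainingArguments(
    output_dir="llama2-7b-adapter",   # hypothetical output path
    num_train_epochs=6,               # raised from 4 in this commit
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="adamw_torch",              # assumption; README only says "Adam"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
```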
adapter_config.json CHANGED
@@ -20,12 +20,12 @@
  "revision": null,
  "target_modules": [
  "v_proj",
- "up_proj",
- "k_proj",
+ "gate_proj",
  "down_proj",
+ "o_proj",
  "q_proj",
- "gate_proj",
- "o_proj"
+ "k_proj",
+ "up_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_rslora": false
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:30ae70a00b0253bd3ed62a956c7e69b2fafa84e66b829897afe7feda0092230d
+ oid sha256:df658483bf682d989359c38f5bdc9707504d28204e8a4e4bc40bed6b848c6f13
  size 639691872
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:78ddbe0eef941e835f27155ccda1d3cf4ac4f1b1c96e4bdc68ee6999d7fe581d
+ oid sha256:a084d94a9d8923d7609a26d957e27547cbc0b3641f51178e4658e4b48fa72746
  size 4664
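Note: `adapter_model.safetensors` and `training_args.bin` are git-lfs pointer files, so only the SHA-256 and size lines change in the repository. To use the retrained adapter, the weights can be attached to the base model with PEFT as sketched below; the adapter repository id is a placeholder, since the repo name does not appear in this diff.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "Holmeister/<adapter-repo>"  # placeholder; actual repo id not shown in this diff

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attaches the LoRA weights from adapter_model.safetensors to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```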