ironrock committed
Commit
86548cd
1 Parent(s): 304c852

Model save

Files changed (2)
  1. README.md +88 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,88 @@
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- generator
model-index:
- name: WeniGPT-Agents-Llama3-1.0.8-SFT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# WeniGPT-Agents-Llama3-1.0.8-SFT

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3744
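
The card doesn't ship usage code, so here is a minimal, hedged sketch for running the adapter with PEFT. The adapter repo id is a placeholder, and anything not stated above (dtype, generation settings) is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # base model named in the card
adapter_id = "<adapter-repo-or-local-path>"  # placeholder: this repo's Hub id or a local checkout

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption; pick a dtype your hardware supports
    device_map="auto",
)
# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Hello, how can I help you?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```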

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 669
- mixed_precision_training: Native AMP
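
As a rough guide to reproducing the run, the list above maps onto `transformers.TrainingArguments` as sketched below. The card's `trl`/`sft` tags suggest these were consumed by an `SFTTrainer`; the dataset, PEFT config, and trainer wiring are not recorded here, so this is only the argument mapping:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="WeniGPT-Agents-Llama3-1.0.8-SFT",  # assumed output dir
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # gives total_train_batch_size = 2 on one GPU
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=669,                   # "training_steps" above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP"; the card doesn't record fp16 vs. bf16
)
```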

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7877 | 0.1342 | 30 | 1.7082 |
| 1.3675 | 0.2685 | 60 | 1.4539 |
| 1.3782 | 0.4027 | 90 | 1.4236 |
| 1.3884 | 0.5369 | 120 | 1.3938 |
| 1.3448 | 0.6711 | 150 | 1.3899 |
| 1.3357 | 0.8054 | 180 | 1.3870 |
| 1.2788 | 0.9396 | 210 | 1.3739 |
| 1.2396 | 1.0738 | 240 | 1.3778 |
| 1.2949 | 1.2081 | 270 | 1.3809 |
| 1.337 | 1.3423 | 300 | 1.3792 |
| 1.3266 | 1.4765 | 330 | 1.3775 |
| 1.2735 | 1.6107 | 360 | 1.3744 |
| 1.2809 | 1.7450 | 390 | 1.3752 |
| 1.2383 | 1.8792 | 420 | 1.3775 |
| 1.2116 | 2.0134 | 450 | 1.3859 |
| 1.0153 | 2.1477 | 480 | 1.3926 |
| 1.2039 | 2.2819 | 510 | 1.3884 |
| 1.2451 | 2.4161 | 540 | 1.3886 |
| 1.2311 | 2.5503 | 570 | 1.3921 |
| 1.1299 | 2.6846 | 600 | 1.3941 |
| 1.2163 | 2.8188 | 630 | 1.3913 |
| 1.0719 | 2.9530 | 660 | 1.3907 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1
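
Since this repo publishes only the adapter weights (see the `adapter_model.safetensors` change below), a common deployment step is to fold them into the base model. The sketch assumes a LoRA-style adapter, which `merge_and_unload` supports; `model` and `tokenizer` are from the loading sketch above:

```python
# Merge the adapter into the base weights so the result can be served
# without a runtime peft dependency, then save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("WeniGPT-Agents-Llama3-1.0.8-SFT-merged")
tokenizer.save_pretrained("WeniGPT-Agents-Llama3-1.0.8-SFT-merged")
```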
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f35f54729146a97287770e3b85cb8e5446960d7a848baf9c820fca99253ad459
+ oid sha256:7f98dd53088e84f2144759135274fc96311fd4e2a2d05f15372610d8ed93e612
  size 4216374752