chchen committed (verified)
Commit e9a1949 · 1 Parent(s): 92ac90f

Model save

Files changed (2):
  1. README.md +67 -0
  2. trainer_log.jsonl +3 -0
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ base_model: meta-llama/Llama-3.1-8B-Instruct
+ library_name: peft
+ license: llama3.1
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: Llama-3.1-8B-Instruct-SFT-400
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Llama-3.1-8B-Instruct-SFT-400
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0645
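+
+ A minimal usage sketch (an editor addition, not part of the original card), assuming the adapter is published under the inferred repo id `chchen/Llama-3.1-8B-Instruct-SFT-400` and that you have access to the gated base model:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # Load the base model, then attach this PEFT adapter on top of it.
+ # The adapter repo id is inferred from the commit author and model name.
+ base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
+ model = PeftModel.from_pretrained(base, "chchen/Llama-3.1-8B-Instruct-SFT-400")
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
+ ```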
+
+ ## Model description
+
+ This model is a PEFT adapter for meta-llama/Llama-3.1-8B-Instruct, trained with LLaMA-Factory (per the tags above); no further description has been provided.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the configuration sketch after this list):
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10.0
+ - mixed_precision_training: Native AMP
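+
+ As a rough guide (an editor addition, not from the original card), these values might map onto a `transformers.TrainingArguments` setup like the sketch below; the actual LLaMA-Factory configuration was not published, so `output_dir`, the eval/logging cadence, and the fp16 choice are assumptions.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hedged reconstruction of the hyperparameters listed above; NOT the original
+ # LLaMA-Factory config. output_dir, eval/logging cadence, and fp16 are assumed.
+ training_args = TrainingArguments(
+     output_dir="Llama-3.1-8B-Instruct-SFT-400",  # assumed
+     learning_rate=5e-6,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=2,
+     seed=42,
+     gradient_accumulation_steps=8,   # 2 x 8 = 16 total train batch (one device)
+     optim="adamw_torch",             # Adam betas=(0.9, 0.999), epsilon=1e-8 are the defaults
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     num_train_epochs=10.0,
+     fp16=True,                       # "Native AMP"; bf16 is equally plausible
+     eval_strategy="steps",
+     eval_steps=50,                   # matches the results table below
+     logging_steps=10,                # matches the trainer_log.jsonl cadence
+ )
+ ```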
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.8183 | 2.2222 | 50 | 0.6115 |
+ | 0.1498 | 4.4444 | 100 | 0.0840 |
+ | 0.0829 | 6.6667 | 150 | 0.0651 |
+ | 0.0952 | 8.8889 | 200 | 0.0645 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.45.2
+ - PyTorch 2.3.0
+ - Datasets 2.19.0
+ - Tokenizers 0.20.0
trainer_log.jsonl CHANGED
@@ -22,3 +22,6 @@
  {"current_steps": 190, "total_steps": 220, "loss": 0.0952, "learning_rate": 2.963665913810451e-07, "epoch": 8.444444444444445, "percentage": 86.36, "elapsed_time": "0:04:26", "remaining_time": "0:00:42"}
  {"current_steps": 200, "total_steps": 220, "loss": 0.0952, "learning_rate": 1.3749795321332887e-07, "epoch": 8.88888888888889, "percentage": 90.91, "elapsed_time": "0:04:39", "remaining_time": "0:00:27"}
  {"current_steps": 200, "total_steps": 220, "eval_loss": 0.06450273841619492, "epoch": 8.88888888888889, "percentage": 90.91, "elapsed_time": "0:04:40", "remaining_time": "0:00:28"}
+ {"current_steps": 210, "total_steps": 220, "loss": 0.0727, "learning_rate": 3.798061746947995e-08, "epoch": 9.333333333333334, "percentage": 95.45, "elapsed_time": "0:04:55", "remaining_time": "0:00:14"}
+ {"current_steps": 220, "total_steps": 220, "loss": 0.1111, "learning_rate": 3.146808153123293e-10, "epoch": 9.777777777777779, "percentage": 100.0, "elapsed_time": "0:05:08", "remaining_time": "0:00:00"}
+ {"current_steps": 220, "total_steps": 220, "epoch": 9.777777777777779, "percentage": 100.0, "elapsed_time": "0:05:10", "remaining_time": "0:00:00"}
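
Each trainer_log.jsonl record is a flat JSON object carrying step counts, a train loss or eval loss, the learning rate, and timing fields, so the log is straightforward to post-process. A small sketch (an editor addition), assuming a local copy of the file:

```python
import json

# Parse the JSONL training log and split train-loss from eval-loss records.
with open("trainer_log.jsonl") as f:
    records = [json.loads(line) for line in f]

train = [(r["current_steps"], r["loss"]) for r in records if "loss" in r]
evals = [(r["current_steps"], r["eval_loss"]) for r in records if "eval_loss" in r]
print("last train loss:", train[-1], "last eval loss:", evals[-1])
```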