tuanna08go committed
Commit d6814b7 · verified · 1 Parent(s): 0fb37b7

End of training

Files changed (2):
  1. README.md +9 -3
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -66,7 +66,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 1
+max_steps: 50
 micro_batch_size: 8
 mlflow_experiment_name: /tmp/dc0abfe34c05583b_train_data.json
 model_type: AutoModelForCausalLM
@@ -91,7 +91,7 @@ wandb_name: 6af7590a-a64a-4086-ae7c-b96b1ade6ef8
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: 6af7590a-a64a-4086-ae7c-b96b1ade6ef8
-warmup_steps: 1
+warmup_steps: 2
 weight_decay: 0.0
 xformers_attention: null

@@ -102,6 +102,8 @@ xformers_attention: null
 # 6af7590a-a64a-4086-ae7c-b96b1ade6ef8

 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.5313

 ## Model description

@@ -129,13 +131,17 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 2
-- training_steps: 1
+- training_steps: 17

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0590 | 1 | 4.4012 |
+| No log | 0.2362 | 4 | 3.7896 |
+| No log | 0.4723 | 8 | 2.3424 |
+| 3.3311 | 0.7085 | 12 | 1.6681 |
+| 3.3311 | 0.9446 | 16 | 1.5313 |


 ### Framework versions
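The README changes above raise max_steps from 1 to 50 and warmup_steps from 1 to 2, and the run itself logged 17 training steps under the cosine scheduler. As a minimal sketch of the learning-rate curve this implies: the learning-rate value below is a placeholder (it is not shown in this diff), and plain torch.optim.AdamW stands in for the run's ADAMW_BNB optimizer.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in parameter/optimizer; the run used ADAMW_BNB and an
# unspecified learning rate (2e-4 here is an assumption).
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4)

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2,     # warmup_steps: 2
    num_training_steps=17,  # training_steps: 17
)

for step in range(1, 18):
    optimizer.step()
    scheduler.step()
    print(f"step {step:2d}  lr {scheduler.get_last_lr()[0]:.2e}")
```

The rate ramps linearly over the first 2 steps, then follows a cosine decay toward zero by the final step, which matches the lr_scheduler_warmup_steps: 2 and training_steps: 17 recorded in the card.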
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:059f0e5fe1bda4258b8c47db28a35d49852cae57ad4e36b6cfc4aed416fd8b2e
+oid sha256:bcc4d9bee8574ac7c849b234b307bc60ff606e49f2d0436c94c5da6c840b5aa9
 size 84047370
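Both versions of adapter_model.bin are Git LFS pointers: the repository tracks only the object's sha256 and byte size, while the weights live in LFS storage. A minimal sketch, assuming the file has already been downloaded locally, of checking a copy against the new pointer:

```python
import hashlib

# Values taken from the updated LFS pointer above.
EXPECTED_OID = "bcc4d9bee8574ac7c849b234b307bc60ff606e49f2d0436c94c5da6c840b5aa9"
EXPECTED_SIZE = 84047370

path = "adapter_model.bin"  # assumed local path to the downloaded file
h = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```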