mayukhbis committed
Commit 2f489c0
1 Parent(s): 04771ea

End of training

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: llama2
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - unsloth
+ - generated_from_trainer
+ base_model: unsloth/llama-3-8b-bnb-4bit
+ model-index:
+ - name: llama3_fine_tuned_15e
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama3_fine_tuned_15e
+
+ This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 3407
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 5
+ - num_epochs: 13
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.1
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
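
The hyperparameters in the card map directly onto a `transformers`/`trl` training setup. Below is a minimal sketch of how such a run could be reconstructed, assuming an unsloth 4-bit base; the LoRA configuration, dataset name, and text field are not recorded in this commit and are placeholders only.

```python
# Hedged sketch of the training configuration implied by the card above.
# Assumptions (not in the commit): LoRA rank/alpha/target modules, the
# dataset name ("your_dataset"), and the text field used for SFT.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base_model from the card
    max_seq_length=8192,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(  # LoRA settings are assumed, not from the card
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

train_dataset = load_dataset("your_dataset", split="train")  # dataset is "unknown" in the card

args = TrainingArguments(
    output_dir="llama3_fine_tuned_15e",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=4,  # train_batch_size: 4
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    gradient_accumulation_steps=4,  # 4 x 4 = total_train_batch_size: 16
    seed=3407,                      # seed: 3407
    lr_scheduler_type="linear",
    warmup_steps=5,                 # lr_scheduler_warmup_steps: 5
    num_train_epochs=13,            # num_epochs: 13
    fp16=True,                      # mixed_precision_training: Native AMP
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",      # assumed field name
    max_seq_length=8192,
    args=args,
)
trainer.train()
```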
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:783f40a47addc20ca54057eee460cd6c8fb753d002fef03f2d64e9bc40154f6f
+ oid sha256:f66d44d81396d4dc76a4f9afc520ff44306f930c28a8d736c56591dc70510634
  size 167832240
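
This commit swaps in new adapter weights (same 167,832,240-byte size, new sha256). A minimal sketch of attaching the updated adapter to the 4-bit base with PEFT, assuming the repo id `mayukhbis/llama3_fine_tuned_15e` (inferred from the committer and model name, so it may differ):

```python
# Hedged sketch: load the quantized base and attach the LoRA adapter.
# The repo id below is an assumption inferred from the commit metadata.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",  # 4-bit base listed in the card
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "mayukhbis/llama3_fine_tuned_15e")  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained("mayukhbis/llama3_fine_tuned_15e")  # assumed repo id
```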
runs/May08_02-50-07_309712b70d33/events.out.tfevents.1715136610.309712b70d33.223.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:02837b11f45df89cf7b9c18f87f2736577d5385f4ee108203917102836dc1236
- size 110151
+ oid sha256:e9cab900cdc00c678f2cb124dd36db365d10d209fa9857eac0e57bc7bc8d4325
+ size 120211
tokenizer_config.json CHANGED
@@ -2058,6 +2058,6 @@
  ],
  "model_max_length": 8192,
  "pad_token": "<|reserved_special_token_250|>",
- "padding_side": "right",
+ "padding_side": "left",
  "tokenizer_class": "PreTrainedTokenizerFast"
  }
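
The tokenizer change flips `padding_side` from `right` (the usual choice during training) to `left`, which is what a decoder-only model needs for batched generation: with right padding, shorter prompts would have pad tokens sitting between the prompt and the generated continuation. A small sketch, reusing the `model` and `tokenizer` assumed loaded in the snippet above:

```python
# With padding_side="left", all prompts end at the same position, so
# generation continues directly from the last real token of each prompt.
prompts = ["Hello, how are", "The capital of France is"]
inputs = tokenizer(prompts, padding=True, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```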