mrcuddle committed
Commit 213672b · verified · 1 Parent(s): 82f9809

End of training

Files changed (2)
  1. README.md +106 -0
  2. generation_config.json +13 -0
README.md ADDED
@@ -0,0 +1,106 @@
---
library_name: transformers
license: llama3
base_model: mrcuddle/Dark-Hermes3-Llama3.2-3B
tags:
- axolotl
- generated_from_trainer
datasets:
- NousResearch/hermes-function-calling-v1
model-index:
- name: Dark-Hermes3-Llama3.2-3B-Func
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0.dev0`
```yaml
base_model: mrcuddle/Dark-Hermes3-Llama3.2-3B
hub_model_id: mrcuddle/Dark-Hermes3-Llama3.2-3B-Func
dataloader_num_workers: 8
datasets:
  - chat_template: alpaca
    field_messages: conversations
    message_property_mappings:
      content: value
      role: from
    path: NousResearch/hermes-function-calling-v1
    split: train
    type: chat_template
eval_steps: 500
evaluation_strategy: steps
fp16: true
gradient_accumulation_steps: 4
gradient_checkpointing: true
learning_rate: 2e-5
logging_dir: /content/outputs/logs
logging_steps: 50
lr_scheduler: linear
lr_scheduler_type: linear
micro_batch_size: 2
num_train_epochs: 3
optimizer: adamw_torch # or another optimizer of your choice
output_dir: /content/outputs
overwrite_output_dir: true
per_device_train_batch_size: 8
save_steps: 500
save_total_limit: 2
use_peft: false
val_set_size: 0.05
warmup_steps: 100
unsloth: true # enable Unsloth if supported by your training framework
```

</details><br>

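A config like this is typically saved to a file and handed to axolotl's trainer, e.g. `accelerate launch -m axolotl.cli.train config.yml`; the exact CLI entry point can vary between axolotl versions, so treat the command as illustrative.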
# Dark-Hermes3-Llama3.2-3B-Func

This model is a fine-tuned version of [mrcuddle/Dark-Hermes3-Llama3.2-3B](https://huggingface.co/mrcuddle/Dark-Hermes3-Llama3.2-3B) on the NousResearch/hermes-function-calling-v1 dataset.
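The checkpoint loads like any other `transformers` causal LM. A minimal usage sketch follows; the prompt and sampling behavior are illustrative, not taken from the training run:

```python
# Minimal sketch: load the fine-tuned checkpoint and run one chat-formatted
# generation. device_map="auto" assumes `accelerate` is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrcuddle/Dark-Hermes3-Llama3.2-3B-Func"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Book a table for two at 7pm."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```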
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8 (train_batch_size 2 × gradient_accumulation_steps 4)
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0

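For orientation outside axolotl, a roughly equivalent `transformers.TrainingArguments` is sketched below; the `output_dir` is a placeholder, and axolotl configures more than is shown here:

```python
# Rough transformers-level restatement of the hyperparameters above.
# A sketch for orientation, not the exact axolotl-generated setup.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",            # placeholder path
    per_device_train_batch_size=2,   # train_batch_size
    per_device_eval_batch_size=2,    # eval_batch_size
    gradient_accumulation_steps=4,   # total train batch = 2 * 4 = 8
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=1.0,
    seed=42,
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-08
    fp16=True,
    gradient_checkpointing=True,
    logging_steps=50,
    save_steps=500,
    save_total_limit=2,
)
```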
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0889 | 1    | 0.3864          |


### Framework versions

- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "_from_model_config": true,
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.49.0"
}
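These values become the model's default decoding settings: `generate()` falls back to them whenever a call does not pass its own, and per-call arguments take precedence. A small sketch of inspecting them (model ID from this repo):

```python
# generation_config.json is exposed as a GenerationConfig; generate() uses
# these values unless a call overrides them explicitly.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("mrcuddle/Dark-Hermes3-Llama3.2-3B-Func")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9
```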