jeiku committed on
Commit
dd8aa6e
1 Parent(s): 1ea3d53

End of training

README.md ADDED
@@ -0,0 +1,154 @@
+ ---
+ library_name: transformers
+ base_model: jeiku/MoEv2
+ tags:
+ - axolotl
+ - generated_from_trainer
+ datasets:
+ - FourOhFour/RP_Phase
+ - jeiku/Writing
+ model-index:
+ - name: Aura-MoEv2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.6.0`
+ ```yaml
+ base_model: jeiku/MoEv2
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+   - path: FourOhFour/RP_Phase
+     type: chat_template
+     chat_template: chatml
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+   - path: jeiku/Writing
+     type: completion
+     field: text
+
+ chat_template: chatml
+
+ shuffle_merged_datasets: true
+ dataset_prepared_path:
+ val_set_size: 0.01
+ output_dir: ./output/out
+
+ hub_model_id: jeiku/Aura-MoEv2
+ hub_strategy: "all_checkpoints"
+ push_dataset_to_hub:
+ hf_use_auth_token: true
+
+ sequence_len: 8192
+ sample_packing: true
+ eval_sample_packing: false
+ pad_to_sequence_len:
+
+ wandb_project: Aura-MoEv2
+ wandb_entity:
+ wandb_watch:
+ wandb_name: Aura-MoEv2
+ wandb_log_model:
+
+ gradient_accumulation_steps: 16
+ micro_batch_size: 2
+ num_epochs: 2
+ optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.00005
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 10
+ evals_per_epoch: 2
+ eval_table_size:
+ eval_max_new_tokens:
+ saves_per_epoch: 1
+ debug:
+ deepspeed:
+ weight_decay: 0.05
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   pad_token: <|finetune_right_pad_id|>
+ ```
+
+ </details><br>
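+
+ For reference, a minimal sketch of how a ShareGPT-style `conversations` record with `from`/`value` fields renders under the ChatML template configured above. The `ROLE_MAP` is an assumed mapping for illustration, not axolotl's internal code.
+
+ ```python
+ # Illustrative sketch: render a ShareGPT-style record as ChatML text,
+ # following the field names configured above.
+ ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}  # assumed mapping
+
+ def to_chatml(example: dict) -> str:
+     """Render example["conversations"] as a ChatML string."""
+     parts = []
+     for msg in example["conversations"]:   # field_messages: conversations
+         role = ROLE_MAP[msg["from"]]       # message_field_role: from
+         parts.append(f"<|im_start|>{role}\n{msg['value']}<|im_end|>")  # message_field_content: value
+     return "\n".join(parts) + "\n"
+
+ print(to_chatml({"conversations": [
+     {"from": "human", "value": "Hi there!"},
+     {"from": "gpt", "value": "Hello!"},
+ ]}))
+ ```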
+
+ # Aura-MoEv2
+
+ This model is a fine-tuned version of [jeiku/MoEv2](https://huggingface.co/jeiku/MoEv2) on the FourOhFour/RP_Phase and the jeiku/Writing datasets.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.7106
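+
+ A minimal inference sketch, assuming the published tokenizer carries the ChatML chat template used during training (and that `torch` plus `accelerate` are installed for `device_map="auto"`):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("jeiku/Aura-MoEv2")
+ model = AutoModelForCausalLM.from_pretrained(
+     "jeiku/Aura-MoEv2", torch_dtype="auto", device_map="auto"
+ )
+
+ messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(input_ids, max_new_tokens=256)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```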
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of an equivalent optimizer/scheduler setup follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 32
+ - optimizer: paged_adamw_8bit (OptimizerNames.PAGED_ADAMW_8BIT) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 2
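+
+ The effective batch size follows directly from the list: micro batch 2 × 16 gradient-accumulation steps = 32, matching `total_train_batch_size` and suggesting a single-GPU run. A hedged sketch of an equivalent optimizer/scheduler setup in plain bitsandbytes/transformers (the tiny `model` and the step count are stand-ins; paged optimizers need a CUDA build of bitsandbytes):
+
+ ```python
+ import torch
+ import bitsandbytes as bnb
+ from transformers import get_cosine_schedule_with_warmup
+
+ model = torch.nn.Linear(8, 8)  # stand-in for the actual network
+ total_steps = 520              # optimizer updates; matches the results table below
+
+ optimizer = bnb.optim.PagedAdamW8bit(
+     model.parameters(),
+     lr=5e-5,             # learning_rate
+     betas=(0.9, 0.999),
+     eps=1e-8,
+     weight_decay=0.05,   # weight_decay from the config above
+ )
+ scheduler = get_cosine_schedule_with_warmup(
+     optimizer, num_warmup_steps=10, num_training_steps=total_steps
+ )
+ ```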
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 29.5342       | 0.0038 | 1    | 1.8693          |
+ | 27.8562       | 0.4990 | 130  | 1.7601          |
+ | 26.632        | 0.9981 | 260  | 1.6990          |
+ | 21.9675       | 1.4952 | 390  | 1.7117          |
+ | 21.648        | 1.9942 | 520  | 1.7106          |
+
+ ### Framework versions
+
+ - Transformers 4.47.0
+ - Pytorch 2.3.1+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.21.0
pytorch_model-00001-of-00003.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:502c7b972efb043c243e483e22070697d9a80e6802393234cc336a2d3277d5ee
+ oid sha256:bf485baa6dc3bf6eddf4196908962e999bf837b8efb6a808531f9078890c1942
  size 4991140832
pytorch_model-00002-of-00003.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f473d4a0bffcc07919729dd40b7d3fc8056e817d6dbec660760f6f346964ec46
+ oid sha256:be031efd96886319733a197b98aea25da09f62c0c4f4e72a95930b00b7042107
  size 4945584372
pytorch_model-00003-of-00003.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0e255dd0c1b21d7b50c49bfd661ac150415ee7f741cb4687abdf62e7a281e49d
+ oid sha256:e78c9ca18dcdddbbded8c5cf52df97c866ad90891813828b4d5ea243a78f866d
  size 4525524595