dyang415 committed
Commit 159f4b4
Parent: 4d68066

End of training

README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - axolotl
+ - generated_from_trainer
+ base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
+ model-index:
+ - name: mixtral-pb
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
+ model_type: AutoModelForCausalLM
+ tokenizer_type: LlamaTokenizer
+ trust_remote_code: true
+
+ load_in_8bit: false
+ load_in_4bit: true
+ strict: false
+ chat_template: inst
+
+ datasets:
+   - path: ./data/pablo_processed.jsonl
+     type: sharegpt
+     conversation: mistral
+ # - path: ./data/tool_used_training.jsonl
+ #   type: sharegpt
+ #   conversation: mistral
+ # - path: ./data/tool_not_used_training.jsonl
+ #   type: sharegpt
+ #   conversation: mistral
+ # - path: ./data/no_tools_training.jsonl
+ #   type: sharegpt
+ #   conversation: mistral
+
+ hub_model_id: dyang415/mixtral-pb
+
+ dataset_prepared_path: last_run_prepared
+ val_set_size: 0.0
+ output_dir: ../mixtral-pb
+
+ model_config:
+   output_router_logits: true
+
+ adapter: qlora
+ lora_model_dir:
+
+ sequence_len: 16384
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ lora_r: 8
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_modules:
+   - q_proj
+   - k_proj
+   - v_proj
+   - o_proj
+
+ # wandb_project: function-call
+ # wandb_name: mixtral-instruct-lora--v1
+ # wandb_log_model: end
+ # hub_model_id: dyang415/mixtral-lora-v0
+
+ gradient_accumulation_steps: 2
+ micro_batch_size: 1
+ num_epochs: 10
+ optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: true
+ fp16: false
+ tf32: false
+
+ gradient_checkpointing: true
+ logging_steps: 1
+ flash_attention: true
+
+ loss_watchdog_threshold: 5.0
+ loss_watchdog_patience: 3
+
+ warmup_steps: 10
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_max_new_tokens: 128
+ saves_per_epoch: 1
+ debug:
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+
+ ```
+
+ </details><br>
+
+ # mixtral-pb
+
+ This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), trained on the `./data/pablo_processed.jsonl` dataset listed in the axolotl config above.
+
+ ## Model description
+
+ This is a QLoRA adapter (LoRA rank 8 on the attention projections) for Mixtral-8x7B-Instruct-v0.1, trained with axolotl on ShareGPT-format conversations using the Mistral chat template.
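+
+ The adapter section of the axolotl config above maps roughly onto the following `peft` `LoraConfig`. This is only a reference sketch; the actual run was driven by axolotl, which may fill in additional defaults.
+
+ ```python
+ from peft import LoraConfig
+
+ # Rough equivalent of the adapter settings in the axolotl config above
+ # (r=8, alpha=16, dropout=0.05, attention projections only).
+ lora_config = LoraConfig(
+     r=8,
+     lora_alpha=16,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ ```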
+
+ ## Intended uses & limitations
+
+ More information needed
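+
+ No usage instructions were provided with this card. The sketch below shows one plausible way to load the adapter for inference on top of the 4-bit base model; it is untested, and the `[INST]` prompt format is assumed from the Mixtral-Instruct template.
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
+ adapter_id = "dyang415/mixtral-pb"
+
+ # Load the base model in 4-bit NF4, matching the quantization used during training.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_id, quantization_config=bnb_config, device_map="auto"
+ )
+ model = PeftModel.from_pretrained(model, adapter_id)
+
+ prompt = "[INST] Summarize what this model does. [/INST]"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```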
+
+ ## Training and evaluation data
+
+ The adapter was trained on `./data/pablo_processed.jsonl` (ShareGPT-format conversations, Mistral conversation template). No validation split was used (`val_set_size: 0.0`), so no evaluation data is reported.
+
+ ## Training procedure
+
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: QuantizationMethod.BITS_AND_BYTES
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: bfloat16
+
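+ These settings correspond roughly to the following `BitsAndBytesConfig`; this is a reference sketch, and the list above remains the authoritative record:
+
+ ```python
+ import torch
+ from transformers import BitsAndBytesConfig
+
+ # Mirrors the quantization settings listed above.
+ bnb_config = BitsAndBytesConfig(
+     load_in_8bit=False,
+     load_in_4bit=True,
+     llm_int8_threshold=6.0,
+     llm_int8_skip_modules=None,
+     llm_int8_enable_fp32_cpu_offload=False,
+     llm_int8_has_fp16_weight=False,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ ```
+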
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 4
+ - total_eval_batch_size: 2
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 10
+
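+ For clarity, the total train batch size of 4 is the product of the per-device micro batch size (1), gradient accumulation steps (2), and number of devices (2); likewise the total eval batch size of 2 is 1 × 2 devices.
+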
+ ### Training results
+
+ No evaluation results were recorded for this run: the config sets `val_set_size: 0.0`, so there was no validation split.
+
+ ### Framework versions
+
+ - PEFT 0.7.0
+ - Transformers 4.37.0
+ - PyTorch 2.0.1+cu117
+ - Datasets 2.17.1
+ - Tokenizers 0.15.0
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8419f71f4cd662958c86bb5baa2da897fc1c3af007f5f2616cea4da5a65ff6db
+ size 27354957
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d122775d1c9a5432f519e225e4ed7d6f57412cfb77640bb5eaed4000526ec53c
+ oid sha256:b393f992d9f6c75c85338a520427456a68b612210f145ef06e89d351dd1a3f8c
  size 27297032
runs/Mar01_01-06-45_azure-jap/events.out.tfevents.1709255206.azure-jap.9873.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:23e8366591e8553910dd7c92f7f7757c2f2a62a5fc7bd1208c4bb916f14e8c46
- size 12379
+ oid sha256:65f5b1779d17a81dbce544274ec51755a91ac9b444a5222894933bc21c8ef588
+ size 13497