alxfgh committed on
Commit
e828fe2
1 Parent(s): 5daa040

Upload 13 files

checkpoint-50/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: /home/talos/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
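
The card's "How to Get Started with the Model" section is still a `[More Information Needed]` placeholder. A minimal sketch of loading this LoRA checkpoint onto the base model might look like the following, assuming `transformers` >= 4.45 and `peft` >= 0.12; the Hub id, the local adapter path, and the text-only prompt are illustrative assumptions, not part of this commit:

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from peft import PeftModel

base_id = "Qwen/Qwen2-VL-7B-Instruct"  # or the local ModelScope cache path from the card

# Load the frozen base model, then attach the LoRA weights from this checkpoint.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "checkpoint-50")  # path to this adapter
processor = AutoProcessor.from_pretrained(base_id)

# Text-only prompt through the chat template; image inputs follow the usual
# Qwen2-VL recipe (content items of type "image" plus the qwen_vl_utils helpers).
messages = [{"role": "user", "content": [{"type": "text", "text": "What does this say?"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```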
checkpoint-50/adapter_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "/home/talos/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [],
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": "^(model)(?!.*(lm_head|output|emb|wte|shared)).*",
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
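
Note that `target_modules` here is a regular expression rather than the usual list of module names; PEFT treats a string value as a pattern to match against module paths. A quick illustration of what a pattern like this selects (the module names below are hypothetical examples, not read from the checkpoint):

```python
import re

# Regex copied from adapter_config.json: target everything under "model"
# except embedding, shared, and output/lm_head modules.
pattern = re.compile(r"^(model)(?!.*(lm_head|output|emb|wte|shared)).*")

for name in [
    "model.layers.0.self_attn.q_proj",  # matched -> wrapped with a LoRA adapter
    "model.layers.0.mlp.down_proj",     # matched
    "model.embed_tokens",               # rejected by the negative lookahead ("emb")
    "lm_head",                          # rejected: does not start with "model"
]:
    print(f"{name:35s} {bool(pattern.match(name))}")
```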
checkpoint-50/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b5348d533b9bbdac0b14e5754451da4023e01702b9179986ac06f610f2e2b09
+ size 80792096
checkpoint-50/additional_config.json ADDED
@@ -0,0 +1 @@
+ {"lora_dtype": null, "lorap_lr_ratio": null, "lorap_emb_lr": 1e-06}
checkpoint-50/configuration.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "framework": "pytorch",
+ "task": "image-text-to-text",
+ "allow_remote": true,
+ "adapter_cfg": {
+ "model_id_or_path": "qwen/Qwen2-VL-7B-Instruct",
+ "model_revision": "master",
+ "sft_type": "lora",
+ "tuner_backend": "peft",
+ "template_type": "qwen2-vl",
+ "dtype": "bf16",
+ "system": "You are a helpful assistant."
+ }
+ }
checkpoint-50/generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "bos_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": 151645,
+ "max_new_tokens": 2048,
+ "pad_token_id": 151643,
+ "repetition_penalty": 1.05,
+ "temperature": 0.1,
+ "top_k": 1,
+ "top_p": 0.001,
+ "transformers_version": "4.45.0.dev0"
+ }
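
If these sampling settings are reused at inference time, they can be loaded through the standard `GenerationConfig` API instead of being retyped; a small sketch (the local path is an assumption):

```python
from transformers import GenerationConfig

# Read the decoding settings saved alongside the adapter.
gen_cfg = GenerationConfig.from_pretrained("checkpoint-50")
print(gen_cfg.temperature, gen_cfg.top_k, gen_cfg.top_p, gen_cfg.max_new_tokens)

# Later: model.generate(**inputs, generation_config=gen_cfg)
```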
checkpoint-50/infer_result/20240906-224206.jsonl ADDED
@@ -0,0 +1,6 @@
+ {"system": "You are a helpful assistant.", "query": "What does this say? <img>", "response": "mg", "history": []}
+ {"system": "You are a helpful assistant.", "query": "dataset/synthetic_image_1_split1_gray_rot0.png", "response": "mg", "history": [["What does this say? <img>", "mg"]]}
+ {"system": "You are a helpful assistant.", "query": "dataset/synthetic_image_1_split1_gray_rot0.png", "response": "mg", "history": [["What does this say? <img>", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"]]}
+ {"system": "You are a helpful assistant.", "query": "<img> dataset/synthetic_image_1_split1_gray_rot0.png", "response": "mg", "history": [["What does this say? <img>", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"]]}
+ {"system": "You are a helpful assistant.", "query": "dataset/synthetic_image_1_split1_gray_rot0.png", "response": "mg", "history": [["What does this say? <img>", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["<img> dataset/synthetic_image_1_split1_gray_rot0.png", "mg"]]}
+ {"system": "You are a helpful assistant.", "query": "<image>what does this say?", "response": "Bupropion even\nthe whole tablet\nDo chew or crush\nmouth Samuel\ntablet Take\nAmelia 1 chew whole\nSwallow with\ntablet Joe not mg\n500 by", "history": [["What does this say? <img>", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["<img> dataset/synthetic_image_1_split1_gray_rot0.png", "mg"], ["dataset/synthetic_image_1_split1_gray_rot0.png", "mg"]], "images": ["dataset/synthetic_image_1_split1_gray_rot0.png"]}
checkpoint-50/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d556a059baaafdd1a1455eea8fba99a818d2423eeabcbe7eae9935ee3ccc41b
+ size 161810282
checkpoint-50/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b14d92d4a2ecb0dced424b936c0dff231c2d654ac7a4e6e609e8a56aee75b56
+ size 14244
checkpoint-50/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ca9ce9d57985d3f1155cd05e76b911b58af6e5e16c296131454f7efca5d9c8c
+ size 1064
checkpoint-50/sft_args.json ADDED
@@ -0,0 +1,246 @@
+ {
+ "model_type": "qwen2-vl-7b-instruct",
+ "model_id_or_path": "qwen/Qwen2-VL-7B-Instruct",
+ "model_revision": "master",
+ "full_determinism": false,
+ "sft_type": "lora",
+ "freeze_parameters": [],
+ "freeze_vit": false,
+ "freeze_parameters_ratio": 0.0,
+ "additional_trainable_parameters": [],
+ "tuner_backend": "peft",
+ "template_type": "qwen2-vl",
+ "output_dir": "/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917",
+ "add_output_dir_suffix": true,
+ "ddp_backend": null,
+ "ddp_find_unused_parameters": null,
+ "ddp_broadcast_buffers": null,
+ "ddp_timeout": 1800,
+ "seed": 42,
+ "resume_from_checkpoint": null,
+ "resume_only_model": false,
+ "ignore_data_skip": false,
+ "dtype": "bf16",
+ "packing": false,
+ "train_backend": "transformers",
+ "tp": 1,
+ "pp": 1,
+ "min_lr": null,
+ "sequence_parallel": false,
+ "model_kwargs": null,
+ "loss_name": null,
+ "dataset": [
+ "train.json"
+ ],
+ "val_dataset": [
+ "val.json"
+ ],
+ "dataset_seed": 42,
+ "dataset_test_ratio": 0.0,
+ "use_loss_scale": false,
+ "loss_scale_config_path": "/home/talos/Downloads/swift/swift/llm/agent/default_loss_scale_config.json",
+ "system": "You are a helpful assistant.",
+ "tools_prompt": "react_en",
+ "max_length": 2048,
+ "truncation_strategy": "delete",
+ "check_dataset_strategy": "none",
+ "streaming": false,
+ "streaming_val_size": 0,
+ "streaming_buffer_size": 16384,
+ "model_name": [
+ null,
+ null
+ ],
+ "model_author": [
+ null,
+ null
+ ],
+ "quant_method": null,
+ "quantization_bit": 0,
+ "hqq_axis": 0,
+ "hqq_dynamic_config_path": null,
+ "bnb_4bit_comp_dtype": "bf16",
+ "bnb_4bit_quant_type": "nf4",
+ "bnb_4bit_use_double_quant": true,
+ "bnb_4bit_quant_storage": null,
+ "rescale_image": -1,
+ "target_modules": "^(model)(?!.*(lm_head|output|emb|wte|shared)).*",
+ "target_regex": null,
+ "modules_to_save": [],
+ "lora_rank": 8,
+ "lora_alpha": 32,
+ "lora_dropout": 0.05,
+ "lora_bias_trainable": "none",
+ "lora_dtype": null,
+ "lora_lr_ratio": null,
+ "use_rslora": false,
+ "use_dora": false,
+ "init_lora_weights": true,
+ "fourier_n_frequency": 2000,
+ "fourier_scaling": 300.0,
+ "rope_scaling": null,
+ "boft_block_size": 4,
+ "boft_block_num": 0,
+ "boft_n_butterfly_factor": 1,
+ "boft_dropout": 0.0,
+ "vera_rank": 256,
+ "vera_projection_prng_key": 0,
+ "vera_dropout": 0.0,
+ "vera_d_initial": 0.1,
+ "adapter_act": "gelu",
+ "adapter_length": 128,
+ "use_galore": false,
+ "galore_target_modules": null,
+ "galore_rank": 128,
+ "galore_update_proj_gap": 50,
+ "galore_scale": 1.0,
+ "galore_proj_type": "std",
+ "galore_optim_per_parameter": false,
+ "galore_with_embedding": false,
+ "galore_quantization": false,
+ "galore_proj_quant": false,
+ "galore_proj_bits": 4,
+ "galore_proj_group_size": 256,
+ "galore_cos_threshold": 0.4,
+ "galore_gamma_proj": 2,
+ "galore_queue_size": 5,
+ "adalora_target_r": 8,
+ "adalora_init_r": 12,
+ "adalora_tinit": 0,
+ "adalora_tfinal": 0,
+ "adalora_deltaT": 1,
+ "adalora_beta1": 0.85,
+ "adalora_beta2": 0.85,
+ "adalora_orth_reg_weight": 0.5,
+ "ia3_feedforward_modules": [],
+ "llamapro_num_new_blocks": 4,
+ "llamapro_num_groups": null,
+ "neftune_noise_alpha": null,
+ "neftune_backend": "transformers",
+ "lisa_activated_layers": 0,
+ "lisa_step_interval": 20,
+ "reft_layer_key": null,
+ "reft_layers": null,
+ "reft_rank": 4,
+ "reft_intervention_type": "LoreftIntervention",
+ "reft_args": null,
+ "use_liger": false,
+ "gradient_checkpointing": true,
+ "deepspeed": null,
+ "batch_size": 1,
+ "eval_batch_size": 1,
+ "auto_find_batch_size": false,
+ "num_train_epochs": 1,
+ "max_steps": -1,
+ "optim": "adamw_torch",
+ "adam_beta1": 0.9,
+ "adam_beta2": 0.95,
+ "adam_epsilon": 1e-08,
+ "learning_rate": 0.0001,
+ "weight_decay": 0.1,
+ "gradient_accumulation_steps": 16,
+ "max_grad_norm": 1,
+ "predict_with_generate": false,
+ "lr_scheduler_type": "cosine",
+ "lr_scheduler_kwargs": {},
+ "warmup_ratio": 0.05,
+ "warmup_steps": 0,
+ "eval_steps": 50,
+ "save_steps": 50,
+ "save_only_model": false,
+ "save_total_limit": 2,
+ "logging_steps": 5,
+ "acc_steps": 1,
+ "dataloader_num_workers": 1,
+ "dataloader_pin_memory": true,
+ "dataloader_drop_last": false,
+ "push_to_hub": false,
+ "hub_model_id": null,
+ "hub_token": null,
+ "hub_private_repo": false,
+ "hub_strategy": "every_save",
+ "test_oom_error": false,
+ "disable_tqdm": false,
+ "lazy_tokenize": true,
+ "preprocess_num_proc": 1,
+ "use_flash_attn": null,
+ "ignore_args_error": false,
+ "check_model_is_latest": true,
+ "logging_dir": "/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917/runs",
+ "report_to": [
+ "tensorboard"
+ ],
+ "acc_strategy": "token",
+ "save_on_each_node": false,
+ "evaluation_strategy": "steps",
+ "save_strategy": "steps",
+ "save_safetensors": true,
+ "gpu_memory_fraction": null,
+ "include_num_input_tokens_seen": false,
+ "local_repo_path": null,
+ "custom_register_path": null,
+ "custom_dataset_info": null,
+ "device_map_config": null,
+ "device_max_memory": [],
+ "max_new_tokens": 2048,
+ "do_sample": null,
+ "temperature": null,
+ "top_k": null,
+ "top_p": null,
+ "repetition_penalty": null,
+ "num_beams": 1,
+ "fsdp": "",
+ "fsdp_config": null,
+ "sequence_parallel_size": 1,
+ "model_layer_cls_name": null,
+ "metric_warmup_step": 0,
+ "fsdp_num": 1,
+ "per_device_train_batch_size": null,
+ "per_device_eval_batch_size": null,
+ "eval_strategy": null,
+ "self_cognition_sample": 0,
+ "train_dataset_mix_ratio": 0.0,
+ "train_dataset_mix_ds": [
+ "ms-bench"
+ ],
+ "train_dataset_sample": -1,
+ "val_dataset_sample": null,
+ "safe_serialization": null,
+ "only_save_model": null,
+ "neftune_alpha": null,
+ "deepspeed_config_path": null,
+ "model_cache_dir": null,
+ "lora_dropout_p": null,
+ "lora_target_modules": [],
+ "lora_target_regex": null,
+ "lora_modules_to_save": [],
+ "boft_target_modules": [],
+ "boft_modules_to_save": [],
+ "vera_target_modules": [],
+ "vera_modules_to_save": [],
+ "ia3_target_modules": [],
+ "ia3_modules_to_save": [],
+ "custom_train_dataset_path": [],
+ "custom_val_dataset_path": [],
+ "device_map_config_path": null,
+ "push_hub_strategy": null,
+ "use_self_cognition": false,
+ "is_multimodal": true,
+ "is_vision": true,
+ "lora_use_embedding": false,
+ "lora_use_all": false,
+ "lora_m2s_use_embedding": false,
+ "lora_m2s_use_ln": false,
+ "torch_dtype": "torch.bfloat16",
+ "fp16": false,
+ "bf16": true,
+ "rank": -1,
+ "local_rank": -1,
+ "world_size": 1,
+ "local_world_size": 1,
+ "bnb_4bit_compute_dtype": "torch.bfloat16",
+ "load_in_4bit": false,
+ "load_in_8bit": false,
+ "train_sampler_random": true,
+ "training_args": "Seq2SeqTrainingArguments(output_dir='/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917', overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, eval_strategy=<IntervalStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=1, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=16, eval_accumulation_steps=None, eval_delay=0, torch_empty_cache_steps=None, learning_rate=0.0001, weight_decay=0.1, adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-08, max_grad_norm=1, num_train_epochs=1, max_steps=-1, lr_scheduler_type=<SchedulerType.COSINE: 'cosine'>, lr_scheduler_kwargs={}, warmup_ratio=0.05, warmup_steps=0, log_level='passive', log_level_replica='warning', log_on_each_node=True, logging_dir='/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917/runs', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=True, logging_steps=5, logging_nan_inf_filter=True, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=50, save_total_limit=2, save_safetensors=True, save_on_each_node=False, save_only_model=False, restore_callback_states_from_checkpoint=False, no_cuda=False, use_cpu=False, use_mps_device=False, seed=42, data_seed=None, jit_mode_eval=False, use_ipex=False, bf16=True, fp16=False, fp16_opt_level='O1', half_precision_backend='auto', bf16_full_eval=False, fp16_full_eval=False, tf32=None, local_rank=0, ddp_backend=None, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=50, dataloader_num_workers=1, dataloader_prefetch_factor=None, past_index=-1, run_name='/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917', disable_tqdm=False, remove_unused_columns=False, label_names=None, load_best_model_at_end=False, metric_for_best_model='loss', greater_is_better=False, ignore_data_skip=False, fsdp=[], fsdp_min_num_params=0, fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_transformer_layer_cls_to_wrap=None, accelerator_config=AcceleratorConfig(split_batches=False, dispatch_batches=False, even_batches=True, use_seedable_sampler=True, non_blocking=False, gradient_accumulation_kwargs=None, use_configured_state=False), deepspeed=None, label_smoothing_factor=0.0, optim=<OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, optim_args=None, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard'], ddp_find_unused_parameters=None, ddp_bucket_cap_mb=None, ddp_broadcast_buffers=None, dataloader_pin_memory=True, dataloader_persistent_workers=False, skip_memory_metrics=True, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, hub_model_id=None, hub_strategy=<HubStrategy.EVERY_SAVE: 'every_save'>, hub_token=None, hub_private_repo=False, hub_always_push=False, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, include_inputs_for_metrics=False, eval_do_concat_batches=True, fp16_backend='auto', evaluation_strategy=None, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=None, mp_parameters='', auto_find_batch_size=False, full_determinism=False, torchdynamo=None, ray_scope='last', ddp_timeout=1800, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, dispatch_batches=None, split_batches=None, include_tokens_per_second=False, include_num_input_tokens_seen=False, neftune_noise_alpha=None, 
optim_target_modules=None, batch_eval_metrics=False, eval_on_start=False, use_liger_kernel=False, eval_use_gather_object=False, sortish_sampler=True, predict_with_generate=False, generation_max_length=None, generation_num_beams=None, generation_config=GenerationConfig {\n \"bos_token_id\": 151643,\n \"do_sample\": true,\n \"eos_token_id\": 151645,\n \"max_new_tokens\": 2048,\n \"pad_token_id\": 151643,\n \"repetition_penalty\": 1.05,\n \"temperature\": 0.1,\n \"top_k\": 1,\n \"top_p\": 0.001\n}\n, train_sampler_random=True, acc_strategy='token', loss_name=None, additional_saved_files=[], metric_warmup_step=0, train_dataset_sample=-1)"
+ }
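
These arguments are the recorded state of ms-swift's `sft` entry point, whose parameter names mirror the JSON keys above. A hedged reconstruction of a comparable run through the Python API (assuming ms-swift 2.x; the exact argument set varies by version and is not recorded in this commit):

```python
# Sketch only: argument names are taken from the keys in sft_args.json and
# assume the ms-swift 2.x SftArguments dataclass.
from swift.llm import SftArguments, sft_main

args = SftArguments(
    model_type="qwen2-vl-7b-instruct",
    sft_type="lora",
    dataset=["train.json"],
    val_dataset=["val.json"],
    lora_rank=8,
    lora_alpha=32,
    lora_dropout=0.05,
    learning_rate=1e-4,
    num_train_epochs=1,
    batch_size=1,
    gradient_accumulation_steps=16,
    max_length=2048,
    eval_steps=50,
    save_steps=50,
    dtype="bf16",
)
sft_main(args)
```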
checkpoint-50/trainer_state.json ADDED
@@ -0,0 +1,152 @@
+ {
+ "best_metric": 0.66251701,
+ "best_model_checkpoint": "/home/talos/Downloads/swift/output/qwen2-vl-7b-instruct/v1-20240906-214917/checkpoint-50",
+ "epoch": 0.11871197507048524,
+ "eval_steps": 50,
+ "global_step": 50,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "acc": 0.43273938,
+ "epoch": 0.0023742395014097048,
+ "grad_norm": 2.6836464405059814,
+ "learning_rate": 4.5454545454545455e-06,
+ "loss": 3.39143324,
+ "memory(GiB)": 17.6,
+ "step": 1,
+ "train_speed(iter/s)": 0.109554
+ },
+ {
+ "acc": 0.42630106,
+ "epoch": 0.011871197507048523,
+ "grad_norm": 2.722611904144287,
+ "learning_rate": 2.272727272727273e-05,
+ "loss": 3.6675992,
+ "memory(GiB)": 18.52,
+ "step": 5,
+ "train_speed(iter/s)": 0.11712
+ },
+ {
+ "acc": 0.4359364,
+ "epoch": 0.023742395014097046,
+ "grad_norm": 2.5712335109710693,
+ "learning_rate": 4.545454545454546e-05,
+ "loss": 3.49505234,
+ "memory(GiB)": 18.52,
+ "step": 10,
+ "train_speed(iter/s)": 0.118467
+ },
+ {
+ "acc": 0.52706037,
+ "epoch": 0.03561359252114557,
+ "grad_norm": 3.1658146381378174,
+ "learning_rate": 6.818181818181818e-05,
+ "loss": 2.71751995,
+ "memory(GiB)": 18.52,
+ "step": 15,
+ "train_speed(iter/s)": 0.118777
+ },
+ {
+ "acc": 0.63293548,
+ "epoch": 0.04748479002819409,
+ "grad_norm": 2.9896538257598877,
+ "learning_rate": 9.090909090909092e-05,
+ "loss": 1.89354668,
+ "memory(GiB)": 18.52,
+ "step": 20,
+ "train_speed(iter/s)": 0.118991
+ },
+ {
+ "acc": 0.66501107,
+ "epoch": 0.05935598753524262,
+ "grad_norm": 4.480954647064209,
+ "learning_rate": 9.998605186060137e-05,
+ "loss": 1.68407936,
+ "memory(GiB)": 18.52,
+ "step": 25,
+ "train_speed(iter/s)": 0.119136
+ },
+ {
+ "acc": 0.75791483,
+ "epoch": 0.07122718504229114,
+ "grad_norm": 2.67279314994812,
+ "learning_rate": 9.990084141112673e-05,
+ "loss": 1.15793657,
+ "memory(GiB)": 18.52,
+ "step": 30,
+ "train_speed(iter/s)": 0.119299
+ },
+ {
+ "acc": 0.78989563,
+ "epoch": 0.08309838254933967,
+ "grad_norm": 2.6674764156341553,
+ "learning_rate": 9.973830136604067e-05,
+ "loss": 0.96687098,
+ "memory(GiB)": 18.52,
+ "step": 35,
+ "train_speed(iter/s)": 0.119309
+ },
+ {
+ "acc": 0.78122487,
+ "epoch": 0.09496958005638818,
+ "grad_norm": 5.596856117248535,
+ "learning_rate": 9.949868360798893e-05,
+ "loss": 1.00073462,
+ "memory(GiB)": 18.52,
+ "step": 40,
+ "train_speed(iter/s)": 0.119402
+ },
+ {
+ "acc": 0.84813137,
+ "epoch": 0.10684077756343671,
+ "grad_norm": 2.6154444217681885,
+ "learning_rate": 9.918235946426388e-05,
+ "loss": 0.66644673,
+ "memory(GiB)": 18.52,
+ "step": 45,
+ "train_speed(iter/s)": 0.119479
+ },
+ {
+ "acc": 0.81814995,
+ "epoch": 0.11871197507048524,
+ "grad_norm": 3.085695743560791,
+ "learning_rate": 9.878981913137179e-05,
+ "loss": 0.72966509,
+ "memory(GiB)": 18.52,
+ "step": 50,
+ "train_speed(iter/s)": 0.119439
+ },
+ {
+ "epoch": 0.11871197507048524,
+ "eval_acc": 0.8361565032549488,
+ "eval_loss": 0.6625170111656189,
+ "eval_runtime": 164.2627,
+ "eval_samples_per_second": 4.56,
+ "eval_steps_per_second": 4.56,
+ "step": 50
+ }
+ ],
+ "logging_steps": 5,
+ "max_steps": 421,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 50,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.4545111068791808e+16,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
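
The log shows the training loss falling from about 3.4 to 0.73 over the first 50 steps, with eval_loss 0.663 and eval_acc 0.836 at the step-50 checkpoint. Pulling the curve out of the file needs only the standard library; a small sketch (the relative path is an assumption):

```python
import json

# Print the logged training/eval curve from this checkpoint's trainer_state.json.
with open("checkpoint-50/trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "loss" in entry:  # training log entries
        print(f"step {entry['step']:>3}  loss {entry['loss']:.3f}  acc {entry['acc']:.3f}")
    elif "eval_loss" in entry:  # evaluation entries
        print(f"step {entry['step']:>3}  eval_loss {entry['eval_loss']:.3f}  eval_acc {entry['eval_acc']:.3f}")
```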
checkpoint-50/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87f32068e5bb4ae7b9c7ad89866db7bd82d22a3a827add49b29a7a1c252c2af5
+ size 7352