Can you teach me how to organize the dataset?
sherlockbt:
Can you guide me on how to convert the dataset you mentioned into the training data format? I want to reproduce your training results.
I can! I did it in a few stages, fumbling in the dark at the time. Here are the config files in chronological order:
Stage 1: LoRA SFT of Qwen2-0.5B on the ninja_diffusion prompt dataset.

```yaml
### model
model_name_or_path: Qwen/Qwen2-0.5B
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
loraplus_lr_ratio: 16.0
lora_rank: 8
lora_alpha: 16
use_unsloth: true
seed: 3141
### dataset
dataset: ninja_diffusion
template: hermes_chatml  # ChatML without a system prompt
cutoff_len: 1024
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: qwen2/0.5B/sd-prompter
logging_steps: 1
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 8
gradient_accumulation_steps: 4
learning_rate: 0.0001
num_train_epochs: 4.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2
### eval
val_size: 0.02
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 200
```
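About the data format itself: whatever name you put under `dataset:` has to be registered in LLaMA-Factory's data/dataset_info.json. A minimal sketch, assuming the default alpaca-style schema (the sample record is illustrative, not an actual ninja_diffusion entry):

```json
{
  "ninja_diffusion": {
    "file_name": "ninja_diffusion.json"
  }
}
```

with data/ninja_diffusion.json holding a list of records like:

```json
[
  {
    "instruction": "a ninja cat",
    "output": "a ninja cat perched on a moonlit rooftop, rain-slick tiles, dynamic pose, cinematic lighting, highly detailed"
  }
]
```

One step the configs don't show: stage 2 below loads qwen2/0.5B/sd-prompter with finetuning_type: full, so the stage-1 LoRA adapter has to be merged into the base model first (llamafactory-cli export, with adapter_name_or_path pointing at the stage-1 output, does that).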
Stage 2: full-parameter continued pretraining (pt) of the merged stage-1 model on Stable Diffusion prompts, using BAdam.

```yaml
### model
model_name_or_path: qwen2/0.5B/sd-prompter
### method
stage: pt
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 10
seed: 31415
### dataset
dataset: stable_diffusion_pt  # Gustavosta/Stable-Diffusion-Prompts
cutoff_len: 1024
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: qwen2/0.5B/SD-base
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.00001
num_train_epochs: 2
lr_scheduler_type: cosine
warmup_ratio: 0.01
pure_bf16: true
flash_attn: fa2
### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
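For the pt stage the dataset is plain text rather than conversations, so no template is set. A sketch of the dataset_info.json entry, assuming it streams Gustavosta/Stable-Diffusion-Prompts from the Hub and that the text lives in its Prompt column (verify the column name against the actual dataset):

```json
{
  "stable_diffusion_pt": {
    "hf_hub_url": "Gustavosta/Stable-Diffusion-Prompts",
    "columns": {
      "prompt": "Prompt"
    }
  }
}
```

LLaMA-Factory reads the mapped prompt column as raw pretraining text at this stage.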
Stage 3: DoRA SFT on ninja_diffusion plus sd_concepts, also training lm_head and embed_tokens.

```yaml
### model
model_name_or_path: qwen2/0.5B/SD-base
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
use_dora: true
lora_rank: 16
lora_alpha: 16
use_unsloth: true
seed: 3141
additional_target: lm_head,embed_tokens
### dataset
dataset: ninja_diffusion,sd_concepts  # sd_concepts is a JSON file in this repo
template: hermes_chatml
cutoff_len: 1024
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: qwen2/0.5B/Qwen2-DiffusionPrompter-v0.1
logging_steps: 1
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 8
learning_rate: 0.00005
num_train_epochs: 2
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.01
bf16: true
flash_attn: fa2
### eval
val_size: 0.05
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 100
```
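Each stage ran as a separate job. Saving the three configs to their own YAML files and launching them in order reproduces the pipeline; the file names here are placeholders:

```sh
llamafactory-cli train stage1_sft.yaml   # LoRA SFT on ninja_diffusion, then merge the adapter
llamafactory-cli train stage2_pt.yaml    # full continued pretraining with BAdam
llamafactory-cli train stage3_sft.yaml   # DoRA SFT on ninja_diffusion + sd_concepts
```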