See axolotl config

axolotl version: `0.4.1`

```yaml
base_model: google/gemma-2b-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# tokenizer_config: google/gemma-2b-it
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: /scratch/jiarui14/dpo-ood/LLM-finetune/WX-Reward-Modeling/pair-pm/ultrafeedback-single
    conversation: gemma
    type: sharegpt.load_ultrachat
    split: "train"
    train_on_split: "train"
warmup_steps: 40
val_set_size: 0.0
output_dir: ./pm_models/gemma-2b-it_lr1e-5_ultrafeedback
#wandb_project: preference-models
#wandb_entity: domain-generalization
wandb_watch:
wandb_name: "gemma-2b-it_lr1e-5_ultrafeedback"
#_response_only
wandb_log_model:
train_on_inputs: false
save_safetensors: true
#noisy_embedding_alpha: 10.0 # default for sharegpt type
dataset_prepared_path: data/gemma-2b-it/ultrafeedback
dataset_processes: 48
#torch_compile: true
sequence_len: 3072
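# The two flags below pack multiple short examples into each 3072-token
# window and pad the remainder, so packed batches stay rectangular.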
sample_packing: true
pad_to_sequence_len: true
trust_remote_code: true
adapter:
lora_model_dir:
gradient_checkpointing: false
#warmup_ratio: 0.1
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5
weight_decay: 0.0
max_grad_norm: 1.0
group_by_length: false
bf16: auto
fp16: false
tf32: true
early_stopping_patience:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
eval_steps:
eval_table_size:
eval_table_max_new_tokens:
save_steps: 50
save_strategy: "steps"
save_total_limit: 2
debug:
ddp: #true
deepspeed: #deepspeed/zero1.json # multi-gpu only
fsdp:
fsdp_config:
special_tokens:
```
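With axolotl 0.4.1, a config like this is typically launched through the CLI, e.g. `accelerate launch -m axolotl.cli.train your_config.yml`. The `deepspeed` and `fsdp` entries are left unset here, so multi-GPU distribution comes from the accelerate launcher alone.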
# pm_models/gemma-2b-it_lr1e-5_ultrafeedback
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it), trained on an UltraFeedback-derived preference dataset (see the `datasets` entry in the axolotl config above).
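A minimal inference sketch with `transformers`, assuming the checkpoint is published as `FlippyDora/gemma-2b-it_lr1e-5_ultrafeedback` and loads as a plain causal LM with the Gemma chat template, as declared in the config (adjust the id and dtype to your setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FlippyDora/gemma-2b-it_lr1e-5_ultrafeedback"  # assumed hub id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma instruction-tuned models ship a chat template; use it to build the prompt.
messages = [{"role": "user", "content": "What makes a helpful assistant response?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```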
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 1
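The total train batch size follows from micro batch size × gradient accumulation steps × devices (2 × 8 × 8 = 128), and the schedule can be reproduced with stock `transformers` utilities. A sketch under those assumptions; the step count is a placeholder, since axolotl derives it from the packed dataset length:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

micro_batch_size, grad_accum_steps, num_devices = 2, 8, 8
total_train_batch_size = micro_batch_size * grad_accum_steps * num_devices
assert total_train_batch_size == 128  # matches the value reported above

# Cosine decay with 40 warmup steps, as in the config; num_training_steps
# here is illustrative only.
dummy_param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW(
    [dummy_param], lr=1e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=40, num_training_steps=1000
)
```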
### Training results

### Framework versions
- Transformers 4.43.3
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1