<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
base_model: jeiku/completion4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

hub_model_id: jeiku/instructered4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

datasets:
  - path: FourOhFour/Instruct_Phase
    type: sharegpt
    conversation: chatml

chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:

gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
```

</details>
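Since the `adapter` and `lora_*` fields are left blank, this config is a full-parameter fine-tune. The training data is ShareGPT-format conversations rendered with the ChatML template (`type: sharegpt`, `chat_template: chatml`). As a minimal sketch of what that rendering looks like (not axolotl's internal code; the role mapping and sample conversation are illustrative):

```python
# Illustrative only: how a ShareGPT-style record is rendered as ChatML text,
# the format implied by `type: sharegpt` + `chat_template: chatml` above.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_chatml(conversations: list[dict]) -> str:
    """Render a ShareGPT `conversations` list as a single ChatML string."""
    parts = []
    for turn in conversations:
        role = ROLE_MAP[turn["from"]]
        parts.append(f"<|im_start|>{role}\n{turn['value']}<|im_end|>")
    return "\n".join(parts) + "\n"

example = [
    {"from": "human", "value": "What is sample packing?"},
    {"from": "gpt", "value": "Packing several short examples into one 8192-token sequence."},
]
print(sharegpt_to_chatml(example))
```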
# instructered4B

This model is a fine-tuned version of [jeiku/completion4B](https://huggingface.co/jeiku/completion4B) on the FourOhFour/Instruct_Phase dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3713
## Model description

More information needed
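For quick testing, a minimal generation sketch with `transformers` follows. It assumes the ChatML chat template is saved with the tokenizer, per the config above; the prompt and sampling settings are illustrative.

```python
# Minimal generation sketch; assumes the ChatML chat template ships with the
# tokenizer as configured above. max_new_tokens matches eval_max_new_tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/instructered4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what instruction tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```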
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 72
- total_eval_batch_size: 6
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 68
- num_epochs: 2
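The derived values above follow directly from the config; as a quick sanity check (the steps-per-epoch figure is approximate, read off the results table below):

```python
# How the derived hyperparameters follow from the axolotl config.
micro_batch_size = 3   # micro_batch_size (per-device batch)
grad_accum = 12        # gradient_accumulation_steps
num_devices = 2        # multi-GPU run on 2 devices

total_train_batch_size = micro_batch_size * grad_accum * num_devices
assert total_train_batch_size == 72

# warmup_ratio 0.1 over ~342 optimizer steps/epoch * 2 epochs ~= 68 warmup steps
steps_per_epoch = 342  # approximate, from the results table (step 602 at epoch ~1.76)
warmup_steps = round(0.1 * steps_per_epoch * 2)
assert warmup_steps == 68
```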
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.336         | 0.0029 | 1    | 1.7114          |
| 0.9631        | 0.2516 | 86   | 1.4098          |
| 0.9347        | 0.5032 | 172  | 1.3828          |
| 0.9142        | 0.7548 | 258  | 1.3693          |
| 0.7967        | 1.0037 | 344  | 1.3659          |
| 0.7912        | 1.2551 | 430  | 1.3728          |
| 0.7957        | 1.5065 | 516  | 1.3730          |
| 0.7951        | 1.7579 | 602  | 1.3713          |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0