---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
  - generated_from_trainer
datasets:
  - shuffled_output_2.json
model-index:
  - name: models/llama_wm_v3_3
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: `0.5.3.dev44+g5bef1906`

```yaml
base_model: meta-llama/Llama-3.2-3B-Instruct

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: shuffled_output_2.json
    type: input_output
dataset_prepared_path: last_run_prepared
dataset_exact_deduplication: false

# sequence_length: 131072
# pad_to_sequence_len: true

output_dir: ./models/llama_wm_v3_3

wandb_project: agent-v0
wandb_name: llama-3b_wm_v3_3

train_on_inputs: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 1
optimizer: adamw_torch
learning_rate: 2e-5
xformers_attention:
flash_attention: true

logging_steps: 5

warmup_steps: 10
saves_per_epoch: 1
weight_decay: 0.0

deepspeed: axolotl/deepspeed_configs/zero3_bf16_cpuoffload_all.json

special_tokens:
  pad_token: <|end_of_text|>
```
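The `special_tokens` entry matters because Llama 3.2 ships without a dedicated pad token, so the config reuses `<|end_of_text|>` for padding. A minimal sketch of the equivalent assignment in plain `transformers` (assuming access to the gated base model):

```python
from transformers import AutoTokenizer

# Load the base tokenizer; Llama 3.2 defines no pad token by default.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

# Reuse the existing <|end_of_text|> token for padding, as in the config above.
tokenizer.pad_token = "<|end_of_text|>"
print(tokenizer.pad_token_id)  # resolves to the id of <|end_of_text|>
```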

# models/llama_wm_v3_3

This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the `shuffled_output_2.json` dataset.
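A minimal sketch for loading the fine-tuned checkpoint with `transformers`. The path below assumes the local `output_dir` from the config above; substitute the Hub repo id if the model is published there.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path assumption: the local output_dir from the Axolotl config above.
model_path = "./models/llama_wm_v3_3"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # the run trained in bf16 (ZeRO-3 bf16 config)
    device_map="auto",
)

# Llama 3.2 Instruct uses the chat template baked into the tokenizer.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```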

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
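For readers who want to mirror this setup outside Axolotl, a rough sketch of the equivalent `transformers` `TrainingArguments` (per-device values only; the 8-GPU DeepSpeed ZeRO-3 launch is what yields the total batch size of 32):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as plain TrainingArguments.
# The deepspeed path is the ZeRO-3 bf16 CPU-offload config from the Axolotl repo.
args = TrainingArguments(
    output_dir="./models/llama_wm_v3_3",
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # micro_batch_size: 4
    gradient_accumulation_steps=1,
    num_train_epochs=1,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_steps=10,
    weight_decay=0.0,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    bf16=True,
    logging_steps=5,
    seed=42,
    deepspeed="axolotl/deepspeed_configs/zero3_bf16_cpuoffload_all.json",
)
```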

### Training results

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
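If you want to reproduce the environment, a small sanity check that the pinned versions above are what is actually installed:

```python
import datasets
import tokenizers
import torch
import transformers

# Compare the installed versions against those this model was trained with.
expected = {
    "transformers": "4.47.0",
    "torch": "2.5.1+cu124",
    "datasets": "3.1.0",
    "tokenizers": "0.21.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, trained with {want}")
```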