---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Qwen2.5-Coder-7B-Erebus-FIM
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-Coder-7B
trust_remote_code: false

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: nyxkrage/erebus-87k-fim-8k
    data_files: data/*
    type:
      field_instruction: prefix
      field_input: suffix
      field_output: middle
      format: "<|fim_suffix|>{input}<|fim_prefix|>{instruction}<|fim_middle|>"
dataset_prepared_path:
val_set_size: 0
output_dir: /workspace/data/output

sequence_len: 8192
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: qwen2.5-coder-7b-erebus-fim
wandb_entity: kragelund
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00005

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

hub_model_id: NyxKrage/Qwen2.5-Coder-7B-Erebus-FIM
hub_strategy: all_checkpoints

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: false
flash_attention: true

warmup_steps: 100
evals_per_epoch: 1
saves_per_epoch: 4
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.1
special_tokens:
```

</details><br>
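The `format` string in the config above trains the model on fill-in-the-middle samples in suffix-first order: the suffix is emitted before the prefix, and the model learns to produce the middle after `<|fim_middle|>`. A minimal sketch of assembling a prompt in that trained layout (the helper name and sample strings are illustrative, not part of the released code):

```python
# Suffix-first FIM layout from the axolotl config:
#   <|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|>
# (field_instruction=prefix and field_input=suffix in the dataset mapping).
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|>"

# The model is expected to generate the missing middle span.
prompt = build_fim_prompt(
    prefix="The door creaked open and ",
    suffix=" before the lights went out.",
)
```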

# Qwen2.5-Coder-7B-Erebus-FIM

This model is a LoRA fine-tune of [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) on the [nyxkrage/erebus-87k-fim-8k](https://huggingface.co/datasets/nyxkrage/erebus-87k-fim-8k) dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
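
## Inference example

A minimal sketch of running the adapter for fill-in-the-middle generation, assuming the LoRA weights are published under the `hub_model_id` from the config (`NyxKrage/Qwen2.5-Coder-7B-Erebus-FIM`); the sample prefix/suffix strings and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-7B"
adapter_id = "NyxKrage/Qwen2.5-Coder-7B-Erebus-FIM"  # hub_model_id from the axolotl config

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Suffix-first FIM layout matching the training format in the config.
prefix = "She reached for the handle, "
suffix = " and the room fell silent."
prompt = f"<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Decode only the newly generated middle span.
middle = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(middle)
```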