---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-33b-instruct
model-index:
- name: lora-logo_fix_full_deepseek33b_ds33i_epoch3_lr_0.0002_alpha_512_r_512
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-33b-instruct
bf16: auto
dataset_prepared_path: ./logo_ds_preprocess_list_gpt35
datasets:
- path: ../logo/fix_deepseek_synthetic_training_data_full.jsonl
  type:
    field_instruction: input
    field_output: output
    format: '### Instruction: {input} ### Response: '
    no_input_format: '{instruction}'
debug: null
deepspeed: ./deepspeed_configs/zero2.json
early_stopping_patience: null
eval_sample_packing: true
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
is_llama_derived_model: true
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 512
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 512
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 4
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: ./lora-logo_fix_full_deepseek33b_ds33i_epoch3_lr_0.0002_alpha_512_r_512
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: true
saves_per_epoch: 1
sequence_len: 1800
special_tokens:
  bos_token: "<\uFF5Cbegin\u2581of\u2581sentence\uFF5C>"
  eos_token: <|EOT|>
strict: true
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.05
wandb_entity: null
wandb_log_model: null
wandb_name: logo_fix_full_deepseek33b_ds33i_epoch3_lr_0.0002_alpha_512_r_512
wandb_project: pbe-axo
wandb_watch: null
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```

</details><br>
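For context, the `format` entry under `datasets` above is the template Axolotl uses to turn each training example into an instruction-following prompt, and inference inputs should follow the same shape. Below is a minimal sketch of that templating; the instruction string is made up, and the exact newlines stored in the config may differ from this rendering.

```python
# Sketch of the instruction/response templating implied by the `format` field
# in the config above. The example instruction is purely illustrative.
PROMPT_TEMPLATE = "### Instruction:\n{input}\n### Response:\n"

def build_prompt(instruction: str) -> str:
    # Fill the template with one instruction; the model is trained to
    # continue the text after "### Response:".
    return PROMPT_TEMPLATE.format(input=instruction)

print(build_prompt("Fix the following program so it draws a square."))
```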

# lora-logo_fix_full_deepseek33b_ds33i_epoch3_lr_0.0002_alpha_512_r_512

This model is a LoRA fine-tune of [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) on the `fix_deepseek_synthetic_training_data_full.jsonl` dataset listed in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.4035

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8824        | 0.0   | 1    | 1.9415          |
| 0.4252        | 0.25  | 82   | 0.4346          |
| 0.4111        | 0.5   | 164  | 0.4133          |
| 0.4152        | 0.75  | 246  | 0.4052          |
| 0.3872        | 1.0   | 328  | 0.3938          |
| 0.3697        | 1.23  | 410  | 0.3914          |
| 0.3583        | 1.47  | 492  | 0.3871          |
| 0.3836        | 1.72  | 574  | 0.3798          |
| 0.3363        | 1.97  | 656  | 0.3753          |
| 0.2814        | 2.2   | 738  | 0.4040          |
| 0.2186        | 2.45  | 820  | 0.3995          |
| 0.2721        | 2.7   | 902  | 0.4047          |
| 0.2561        | 2.95  | 984  | 0.4035          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
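
### How to load (sketch)

A minimal sketch of loading this LoRA adapter on top of the base model with the PEFT and Transformers versions listed above. The `adapter_id` below is a placeholder for wherever the adapter weights are stored (the training `output_dir` or a Hub repo), and 8-bit loading mirrors the `load_in_8bit: true` setting used during training.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-33b-instruct"
# Placeholder: point this at the directory or Hub repo holding the adapter weights.
adapter_id = "./lora-logo_fix_full_deepseek33b_ds33i_epoch3_lr_0.0002_alpha_512_r_512"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the base model in 8-bit, as during training, then attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Prompt shaped like the training template: "### Instruction: {input} ### Response:".
prompt = "### Instruction:\nYour instruction goes here.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```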