---
library_name: peft
tags:
- generated_from_trainer
model-index:
- name: qlora-yi-6b-200k-aezakmi-dpo-v2-run1
  results: []
datasets:
- adamo1139/AEZAKMI_v2
- adamo1139/rawrr_v1
license: apache-2.0
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: ./yi-6b-200k-rawrr-run2
base_model_config: ./yi-6b-200k-rawrr-run2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: false
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
bnb_4bit_use_double_quant: true
bnb_4bit_compute_dtype: torch.bfloat16
torch_dtype: bf16
strict: false

datasets:
  - path: /run/.../axolotl/datasets/aezakmi_v2/aezakmi_v2_draft2.jsonl
    type: alpaca_w_system2.load_open_orca_chatml
    conversation: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.01

adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: true

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model:

output_dir: ./qlora-yi-6b-200k-aezakmi-dpo-v2-run1
pad_to_sequence_len: true

micro_batch_size: 1
gradient_accumulation_steps: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
torchdistx_path:
lr_scheduler: constant
learning_rate: 0.00008

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
bfloat16: true
flash_optimum: false

gradient_checkpointing: true
early_stopping_patience:
save_safetensors:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
deepspeed:
seed: 42

warmup_steps: 100
eval_steps: 5000000
save_steps: 500
save_total_limit: 10
eval_table_size:
eval_table_max_new_tokens:
debug:
weight_decay:
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<|startoftext|>"
  eos_token: "<|endoftext|>"
  unk_token: ""
```

</details>

# Yi-6b-200k-AEZAKMI-v2-rawrr1

Yi 6B 200k > DPO (QLoRA) on the rawrr v1 dataset > SFT on the AEZAKMI v2 dataset.

DPO training took around 2 hours and SFT training took around 12 hours, all done locally on a single RTX 3090 Ti.

The fine-tuning config is exactly the same as for my previous finetune [adamo1139/Yi-6B-200K-AEZAKMI-v2](https://huggingface.co/adamo1139/Yi-6B-200K-AEZAKMI-v2); the only change is the base model, swapped from plain yi-6b-200k to yi-6b-200k fine-tuned with DPO on rawrr v1.

## Intended uses & limitations

This is my first DPO+SFT finetune, so there may be some rough edges. So far I like this model a lot and have not encountered any refusals.

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

### Framework versions

- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
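### Quantization config in Python

The quantization settings listed under "Training procedure" map directly onto a `transformers.BitsAndBytesConfig`. The sketch below is not the original training code, just a restatement of the listed values in Python:

```python
import torch
from transformers import BitsAndBytesConfig

# Restates the bitsandbytes settings listed above (nf4, double quant, bf16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```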
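### Inference sketch

Since the SFT data was formatted as ChatML (`conversation: chatml` in the config above), prompts should follow the ChatML template. Below is a minimal inference sketch that attaches the QLoRA adapter with PEFT. Both repository ids are placeholders inferred from this card, the adapter was actually trained on top of a local DPO checkpoint (`./yi-6b-200k-rawrr-run2`), and the system prompt is only an example:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder ids: substitute whichever base model the adapter was published against.
base_id = "01-ai/Yi-6B-200K"
adapter_id = "adamo1139/qlora-yi-6b-200k-aezakmi-dpo-v2-run1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # or pass the BitsAndBytesConfig above as quantization_config=
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

# ChatML prompt, matching the `conversation: chatml` training format.
prompt = (
    "<|im_start|>system\n"
    "A chat.<|im_end|>\n"
    "<|im_start|>user\n"
    "Why is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```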