
Built with Axolotl

This model is a QLoRA DPO fine-tune of jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B on the mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha dataset.
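Since training used the ChatML chat template (see the config below), the model can be queried with standard transformers tooling. A minimal inference sketch, not an official usage snippet: the repository id is a placeholder and the generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "user/model"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template renders the messages with the tokenizer's chat
# template (ChatML, per the training config below).
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```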

The following Axolotl configuration was used during training:

```yaml
base_model: jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

rl: dpo
chat_template: chatml
datasets:
  - path: mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
    split: train
    type: chatml.intel
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out

adapter: qlora
lora_model_dir:

sequence_len: 1800
sample_packing: false
pad_to_sequence_len: false

lora_r: 32
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 5e-7

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 1
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 1080
max_steps: 1080
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
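The `rl: dpo` setting selects Direct Preference Optimization (Rafailov et al., 2023), which trains the policy directly on the chosen/rejected pairs in the dataset instead of fitting a separate reward model. In standard notation (the β coefficient is not set in the config above, so the framework default applies):

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

The `train/rewards/chosen` and `train/rewards/rejected` metrics reported below correspond to the β-scaled log-probability ratios inside this expression, so their difference is the reward margin.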

Training results

"train/loss": 0.4733, "train/grad_norm": 15.831088066101074, "train/learning_rate": 0, "train/rewards/chosen": -0.6122800707817078, "train/rewards/rejected": -1.650345802307129, "train/rewards/accuracies": 0.875, "train/rewards/margins": 1.0380656719207764, "train/logps/rejected": -379.778564453125, "train/logps/chosen": -250.2126007080078, "train/logits/rejected": -2.0232465267181396, "train/logits/chosen": -2.1629369258880615, "train/epoch": 2.08594881699662, "train/global_step": 1080, "_timestamp": 1717044966.608197, "_runtime": 12949.461512088776, "_step": 1080, "train_runtime": 12950.5619, "train_samples_per_second": 1.334, "train_steps_per_second": 0.083, "total_flos": 0, "train_loss": 0.560937881635295,
