# OpenHermes 2.5 Strix Philosophy Mistral 7B

A rank-stabilized LoRA fine-tune of Mistral 7B, trained with the configuration below.

Training configuration (a `peft`/`transformers` sketch follows the list):

- LoRA rank: 8
- LoRA alpha: 16
- LoRA dropout: 0
- Rank-stabilized LoRA: Yes
- Number of epochs: 3
- Learning rate: 1e-5
- Batch size: 2
- Gradient accumulation steps: 4
- Weight decay: 0.01
- Target modules:

  - Query projection (`q_proj`)
  - Key projection (`k_proj`)
  - Value projection (`v_proj`)
  - Output projection (`o_proj`)
  - Gate projection (`gate_proj`)
  - Up projection (`up_proj`)
  - Down projection (`down_proj`)
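
For reference, this is roughly how the configuration above maps onto `peft` and `transformers`. It is a minimal sketch, not the released training script: the base model id (`teknium/OpenHermes-2.5-Mistral-7B`), the dtype, and the dataset handling are assumptions not stated on this card.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Assumed base model; not stated on this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B",
    torch_dtype=torch.bfloat16,
)

# LoRA settings copied from the list above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    use_rslora=True,  # rank-stabilized LoRA
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Optimizer and schedule settings copied from the list above.
training_args = TrainingArguments(
    output_dir="openhermes-strix-philosophy-lora",  # placeholder
    num_train_epochs=3,
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    weight_decay=0.01,
    bf16=True,
)
# Pass `model` and `training_args` to a Trainer/SFTTrainer together
# with the dataset (omitted here).
```

Note that with a batch size of 2 and 4 gradient-accumulation steps, the effective batch size is 8 per device.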
Model size: 7.24B params (BF16, safetensors)
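
A minimal inference sketch, assuming this repo (`sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA`) hosts a PEFT LoRA adapter rather than merged weights; the `-LoRA` suffix suggests so, but check the repo files. `AutoPeftModelForCausalLM` resolves the base model from the adapter's `adapter_config.json`.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA"

# Loads the base model named in adapter_config.json and attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Assumes the repo ships tokenizer files; otherwise load the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "What is the categorical imperative?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```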