
NeuralMaxime-7B-DPO

DPO fine-tuned on Intel's Orca DPO pairs dataset (Intel/orca_dpo_pairs).
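
For illustration, here is a minimal sketch of how a DPO fine-tune on this dataset is typically run with the trl library. The base checkpoint, hyperparameters, and prompt formatting below are assumptions for the sketch, not the exact recipe behind this model, and trl argument names vary across versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Assumed starting checkpoint; the card only names the merged Monarch models.
base = "mlabonne/NeuralMonarch-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs has "system"/"question"/"chosen"/"rejected" columns;
# DPOTrainer expects "prompt"/"chosen"/"rejected".
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": row["system"] + "\n" + row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=dataset.column_names,
)

config = DPOConfig(
    output_dir="NeuralMaxime-7B-DPO",
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)
# Recent trl versions take processing_class=; older ones use tokenizer=.
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```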

Merge created with MergeKit.

Merged models: NeuralMonarch-7B and AlphaMonarch-7B (by mlabonne).
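
The card does not include a usage snippet; the following is standard transformers loading and generation code for this checkpoint, assuming the FP16 weights listed below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/NeuralMaxime-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",          # requires the accelerate package
)

prompt = "Explain model merging in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```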

Model size: 7.24B params · Tensor type: FP16 (Safetensors)

Dataset used to train Kukedlc/NeuralMaxime-7B-DPO: Intel/orca_dpo_pairs