
This variant of the model has been fine-tuned from teknium/OpenHermes-2.5-Mistral-7B using Direct Preference Optimization (DPO), a reinforcement-learning-style preference-tuning technique. Training used a preference dataset derived from HuggingFace's No Robots dataset.
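
For readers unfamiliar with DPO, the following is a minimal, self-contained sketch of the per-example DPO loss (the names, example log-probabilities, and the default `beta` value here are illustrative, not taken from this model's actual training configuration):

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    # Log-ratio of policy vs. frozen reference model for each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # Numerically stable -log(sigmoid(logits))
    return math.log1p(math.exp(-logits))

# Illustrative sequence log-probabilities: the policy prefers the chosen
# response more (relative to the reference) than the rejected one,
# so the loss is below log(2).
loss = dpo_loss(-10.0, -14.0, -11.0, -12.0)
```

Minimizing this loss pushes the policy to increase the likelihood of preferred responses relative to rejected ones, while the reference model keeps it from drifting too far from the base model.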

Downloads last month: 3,067
Model size: 7.24B params (Safetensors)
Tensor type: F32