MarcoroCapy-7B

This model is a DPO fine-tune of mlabonne/Marcoro14-7B-slerp on argilla/distilabel-capybara-dpo-7k-binarized.
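For readers unfamiliar with DPO, the per-pair objective can be sketched in plain Python. The function below is illustrative only: the argument names and the `beta=0.1` default are common conventions, not values taken from this model's training run.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of a response under
    the trained policy or the frozen reference model. beta controls
    how far the policy may drift from the reference (0.1 is a common
    default; the value used for this model is not stated here).
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log(sigmoid(margin)): small when the policy already prefers the
    # chosen response by a wide margin relative to the reference.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy favors the chosen response more than the reference does,
# so the loss drops below -log(0.5) ~= 0.693.
loss = dpo_loss(-10.0, -30.0, -12.0, -28.0)
```

Minimizing this loss over the 7k binarized Capybara pairs pushes the policy to rank chosen responses above rejected ones while staying close to the reference model.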


Built with Distilabel

Process

  • Realigned the chat template to ChatML
  • Trained for 1 epoch
  • Learning rate: 5e-5
  • Training time: about 4.5 hours on 1× H100
  • Cost: ~$20
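The ChatML realignment mentioned above means prompts are wrapped in `<|im_start|>`/`<|im_end|>` turn markers. A minimal sketch of that format follows; the helper name and message shape are illustrative, not part of the model card.

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt.

    Each turn is wrapped in <|im_start|>/<|im_end|> markers, and a
    final assistant header is appended to cue the model to generate.
    """
    rendered = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    )
    return rendered + "<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is DPO?"},
])
```

In practice you would let the tokenizer's built-in chat template do this (e.g. `tokenizer.apply_chat_template`), which produces the same structure once the template is set to ChatML.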

GGUF

TODO

Evaluations

TODO

