
Barcenas Llama3 8b ORPO

A model trained with the novel ORPO (Odds Ratio Preference Optimization) method, based on the recent Llama 3 8B, specifically: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct

The model was trained on the dataset reciperesearch/dolphin-sft-v0.1-preference, which uses Dolphin data with GPT-4 to improve its conversation sections.
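For reference, ORPO augments the standard supervised (negative log-likelihood) loss with a log odds-ratio term that rewards the chosen response over the rejected one. The following is a minimal scalar sketch of that per-example loss, simplified from the ORPO formulation; the function name and scalar inputs are hypothetical and this is not the actual training code used for this model:

```python
import math


def orpo_loss(p_chosen: float, p_rejected: float, lam: float = 0.1) -> float:
    """Simplified per-example ORPO loss.

    p_chosen / p_rejected: probability the model assigns to the chosen
    and rejected responses (values in (0, 1)).
    lam: weight of the odds-ratio term (lambda in the ORPO paper).
    """
    def odds(p: float) -> float:
        # odds(y) = p(y) / (1 - p(y))
        return p / (1.0 - p)

    # Supervised term: negative log-likelihood of the chosen response.
    nll = -math.log(p_chosen)

    # Preference term: -log sigmoid of the log odds ratio,
    # which is small when the chosen response is far more likely.
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

    return nll + lam * l_or
```

Intuitively, when the model already prefers the chosen answer (`p_chosen` high, `p_rejected` low) the loss is near the plain NLL; when it prefers the rejected answer, the odds-ratio penalty grows, pushing the policy toward the preference data without a separate reference model.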

Made with ❤️ in Guadalupe, Nuevo León, Mexico 🇲🇽

Model size: 8.03B params · Tensor type: FP16 · Format: Safetensors
