🧪 Just Another Model Experiment
This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
Mistral-Nemo-Prism-12B
Mahou-1.5-mistral-nemo-12B-lorablated fine-tuned on the Arkhaios-DPO and Purpura-DPO preference datasets.
The goal was to reduce archaic language and purple prose in a completely uncensored model.
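If you want to poke at it anyway, here's a minimal inference sketch using 🤗 Transformers. This is an illustration, not an official quickstart: the hub id is a placeholder taken from the model name above, and the prompt and sampling settings are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mistral-Nemo-Prism-12B"  # placeholder; use the full hub id (namespace/model)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 12B in bf16 needs roughly 24 GB of VRAM
    device_map="auto",           # spread across available GPUs if needed
)

messages = [{"role": "user", "content": "Describe a thunderstorm in plain, modern prose."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```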
Method
ORPO-tuned on 8x NVIDIA A40 GPUs for 2 epochs.
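For the curious, the recipe would look roughly like the sketch below using TRL's `ORPOTrainer`. This is not the actual training script: the base-model and dataset ids are placeholders from the names above, the prompt/chosen/rejected column layout is assumed, and every hyperparameter except the epoch count is a guess.

```python
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "Mahou-1.5-mistral-nemo-12B-lorablated"  # placeholder; prepend the owning namespace

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Dataset ids as named in the card; prepend the owning namespace. Both sets
# are assumed to use the prompt/chosen/rejected columns ORPOTrainer expects.
train = concatenate_datasets([
    load_dataset("Arkhaios-DPO", split="train"),
    load_dataset("Purpura-DPO", split="train"),
]).shuffle(seed=42)

args = ORPOConfig(
    output_dir="Mistral-Nemo-Prism-12B",
    num_train_epochs=2,             # the 2 epochs stated above
    per_device_train_batch_size=1,  # assumed; the card only names the hardware
    gradient_accumulation_steps=8,  # assumed
    learning_rate=5e-6,             # assumed
    beta=0.1,                       # ORPO's weight on the odds-ratio term; assumed
    max_length=2048,                # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    tokenizer=tokenizer,  # trl < 0.12; newer versions take processing_class=
)
trainer.train()
```

Unlike DPO, ORPO folds the preference signal into the supervised loss via an odds-ratio term, so no separate frozen reference model is needed, which keeps the memory footprint down on a setup like this.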