
exl2 quants of Mistral-Nemo-Prism-12B. The main branch contains only measurement.json; check the repository's other revisions (branches) for the actual quantized weights.


🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mistral-Nemo-Prism-12B

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

Method

ORPO-tuned on 8x A40 GPUs for 2 epochs.
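For reference, the ORPO objective combines a standard NLL loss on the chosen response with a log-odds-ratio preference term. A minimal sketch of the per-example loss in pure Python; the λ value and the toy log-probabilities are illustrative only, not the settings used for this finetune:

```python
import math

def log_odds(avg_logp: float) -> float:
    # odds(p) = p / (1 - p), with p the length-averaged sequence likelihood
    p = math.exp(avg_logp)
    return math.log(p) - math.log1p(-p)

def orpo_loss(avg_logp_chosen: float, avg_logp_rejected: float, lam: float = 0.1) -> float:
    # Supervised NLL term on the chosen (preferred) response
    nll = -avg_logp_chosen
    # Preference term: -log sigmoid(log odds ratio of chosen vs. rejected)
    ratio = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    l_or = -math.log(1.0 / (1.0 + math.exp(-ratio)))
    return nll + lam * l_or

# Toy numbers: the chosen response is more likely per token than the rejected one
loss = orpo_loss(avg_logp_chosen=-0.5, avg_logp_rejected=-2.0)
print(round(loss, 4))
```

The preference term only nudges the model when the odds of the chosen and rejected responses are close; it vanishes as the chosen response becomes much more likely, which is why it can be folded into ordinary finetuning with a single weight λ.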
