---
library_name: transformers
license: apache-2.0
base_model:
  - nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
  - nbeerbower/Arkhaios-DPO
  - nbeerbower/Purpura-DPO
---


# Mistral-Nemo-Prism-12B-v7

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

## Method

ORPO-tuned on 8x A40 GPUs for 10 epochs.

For this version, beta was increased to 2.
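For reference, the beta here weights ORPO's odds-ratio preference term against the standard SFT loss (as in TRL's `ORPOConfig`). A minimal stdlib-only sketch of that term, using toy sequence-level log-probabilities (the function name and inputs are illustrative, not from this repo's training code):

```python
import math

def orpo_or_loss(logp_chosen: float, logp_rejected: float, beta: float = 2.0) -> float:
    """Odds-ratio term of the ORPO loss: -beta * log(sigmoid(log-odds ratio)).

    logp_chosen / logp_rejected are (toy) average log-probs of the preferred
    and dispreferred completions; beta is the weight raised to 2 in this run.
    """
    def log_odds(logp: float) -> float:
        # log(p / (1 - p)) computed stably from log p
        return logp - math.log1p(-math.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -beta * math.log(1.0 / (1.0 + math.exp(-ratio)))
```

Raising beta makes the model pay proportionally more for preferring the rejected (purple-prose) completion over the chosen one, relative to the plain SFT objective.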

In conclusion, LoRA tuning does not seem able to fully remove some of the language habits deeply embedded in the model.