GGUF: Here

EXL2 6.0BPW (Thx Lucy <3): Here

More quants will be up soon.

Model Details

I took the Gemma-2 base model, trained a LoRA on 2 million tokens worth of Claude data, and merged it via the Axolotl CLI. The data used was similar to what the Magnum models are trained on, hence Claude Shannon for the card image.

Prompting

In testing it worked well with basic sampler settings (specifically, the Simple-1 preset included with SillyTavern); it was coherent and stable throughout my testing, as well as being quite proactive. I used the Gemma2 instruct format provided in SillyTavern and found no refusals in RP, even during extreme NSFW activities. When using it as an assistant, however, I ran into many refusals, but all of them were easily dealt with by using MooreRP, a custom prompt / context template that uncensors the model.
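For reference, SillyTavern's Gemma2 preset wraps each turn in Gemma's turn markers. A minimal sketch of that format (the helper name and system-prompt handling here are illustrative, not part of SillyTavern):

```python
# Sketch of the Gemma-2 instruct format used by SillyTavern's Gemma2 preset.
# <start_of_turn> / <end_of_turn> are special tokens in Gemma's vocabulary.
def build_gemma2_prompt(user_message: str, system_prompt: str = "") -> str:
    # Gemma has no dedicated system role, so a system prompt is typically
    # prepended to the first user turn.
    content = f"{system_prompt}\n\n{user_message}" if system_prompt else user_message
    return (
        f"<start_of_turn>user\n{content}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma2_prompt("Write a short scene.", system_prompt="You are a roleplay partner."))
```

Custom templates like MooreRP mainly swap out the system prompt and surrounding context while keeping these turn markers intact.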

MooreRP links:
Context Template: https://files.catbox.moe/b1lpao.json
Instruct Mode: https://files.catbox.moe/21joxa.json

(Made by @a.lk on Discord)

Config

The LoRA for this model was trained in Axolotl for 2 epochs at a rank of 32 and a learning rate of 2e-5 on 2x RTX 6000s (provided by Kubernetes Bad), using the CustomGemma2 prompt strategy.
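The setup above roughly corresponds to an Axolotl YAML config like the following. This is a hedged sketch, not the actual config used: the dataset path is a placeholder, `lora_alpha` and the batch settings are assumptions, and the prompt-strategy key is my guess at how the CustomGemma2 strategy is referenced.

```yaml
# Sketch of an Axolotl config matching the stated hyperparameters.
base_model: google/gemma-2-9b

adapter: lora
lora_r: 32                 # rank 32, as stated above
lora_alpha: 64             # assumption; alpha is often set to 2x rank
lora_dropout: 0.05         # assumption
lora_target_linear: true   # assumption: target all linear layers

datasets:
  - path: ./claude-data.jsonl   # placeholder path
    type: customgemma2          # assumed name of the CustomGemma2 strategy

num_epochs: 2
learning_rate: 2e-5
micro_batch_size: 1            # assumption
gradient_accumulation_steps: 8 # assumption
```

After training, the adapter was merged back into the base weights via the Axolotl CLI to produce the full model.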

Credits

Thanks to Kubernetes Bad for providing compute for this train, and to Lucy Knada, Nopm, Kalomaze, and the rest of Anthracite for helping with the training. (But not Alpin)

