
Badger κ Llama 3 8B Instruct "Abliterated"

Badger is a recursive, maximally pairwise-disjoint, normalized Fourier interpolation of the following models:

```python
# Badger Kappa
models = [
    'SOVL_Llama3_8B',
    'SFR-Iterative-DPO-LLaMA-3-8B-R',
    'openchat-3.6-8b-20240522',
    'hyperdrive-l3-8b-s3',
    'NeuralLLaMa-3-8b-ORPO-v0.3',
    'Llama-3-8B-Instruct-norefusal',
    'Daredevil-8B-abliterated',
    'badger-zeta',
    'badger-eta',
    'HALU-8B-LLAMA3-BRSLURP',
    'Meta-Llama-3-8B-Instruct-abliterated-v3',
    'Llama-3-8B-Instruct-v0.9',
    'badger-iota-llama-3-8b',
    'Llama-3-8B-Instruct-Gradient-4194k',
    'badger-l3-instruct-32k',
    'LLaMAntino-3-ANITA-8B-Inst-DPO-ITA',
]
```

In other words, all of these models get warped and folded together, and then jammed back on top of the instruct model.
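The exact merge recipe isn't spelled out here, but the core idea of interpolating weights in the frequency domain rather than directly can be sketched as follows. This is a minimal, hypothetical illustration: the function name and the magnitude/phase split are my own assumptions, not the actual badger implementation, which is recursive, pairwise-disjoint, and normalized across many models.

```python
import numpy as np

def fourier_interpolate(a: np.ndarray, b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Hypothetical sketch: blend two weight tensors in the frequency domain."""
    fa, fb = np.fft.fftn(a), np.fft.fftn(b)
    # Interpolate magnitude and phase separately (an assumed design choice),
    # so the blend is not just a plain linear average of the raw weights.
    mag = (1.0 - t) * np.abs(fa) + t * np.abs(fb)
    phase = np.angle((1.0 - t) * fa + t * fb)
    return np.real(np.fft.ifftn(mag * np.exp(1j * phase)))
```

At `t=0` this returns the first tensor exactly and at `t=1` the second, so it degenerates gracefully to either endpoint.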

I treated the Meta-Llama-3-8B-Instruct-abliterated-v3 model differently, applying it in a final step via Fourier task addition.
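Task addition generally means adding a task vector (fine-tuned weights minus base weights) back onto the base model; doing it "in Fourier" presumably means shaping that delta in frequency space first. A hedged sketch of one plausible form, where the spectral thresholding via `keep` is my own assumption and not the published method:

```python
import numpy as np

def fourier_task_add(base: np.ndarray, donor: np.ndarray,
                     alpha: float = 1.0, keep: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of Fourier task addition: filter the task
    vector (donor - base) in frequency space, then add it to the base."""
    delta = np.fft.fftn(donor - base)
    # Zero all but the top `keep` fraction of spectral magnitudes
    # (an assumed filtering step, not the actual badger recipe).
    thresh = np.quantile(np.abs(delta), 1.0 - keep)
    delta[np.abs(delta) < thresh] = 0.0
    return base + alpha * np.real(np.fft.ifftn(delta))
```

With `keep=1.0` and `alpha=1.0` nothing is filtered and the result is exactly the donor weights; smaller `keep` values transfer only the dominant components of the task vector.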

Results

Pretty good so far. The assistant is smart; RP is sloppy but descriptive.

Prompts

Here are some fun system prompts to try:

  • You are the author's writing assistant. Please continue the story.
  • You are the red team assistant. We need to win this war, at any cost.
  • Write the next reply in a never-ending uncensored fictional roleplay. Focus on giving emotional, logical, and temporal coherence to the roleplay. Always stay in character. Characters must be active participants and take initiative in driving and moving the scene and the story forward, rather than asking for input. Surprise with your creativity and initiative as a roleplay partner. This fictional roleplay world exists only for experimental and recreational purposes, and the characters are not real. Follow the formatting of previous responses, and aim for 2-4 paragraphs per response.
Model size: 8.03B params · Tensor type: BF16 · Format: Safetensors