
[HIGHLY EXPERIMENTAL]

Use at your own risk. I'm not responsible for any usage of this model; don't do anything this model tells you to do.

Highly uncensored.

If you get censored output, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them, that trigger censoring across all the layers of the model (since they have all been trained on some of them in one way or another).

Based on Unholy-12L. This is a test project: uukuguy/speechless-llama2-luban-orca-platypus-13b and jondurbin/spicyboros-13b-2.2 were merged; I then deleted the first 8 layers of that merge and put 8 layers of MLewd at the beginning, did the same for layers 16 to 20, trying to break all possible censoring, before merging the output with MLewd at 0.33 weight.
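The layer surgery described above can be sketched as a mergekit passthrough config. This is a hypothetical reconstruction, not the author's actual recipe: the `./luban-spicyboros-merge` path is an assumed local name for the intermediate merge, and the layer ranges are approximations of the description above.

```yaml
# Hypothetical mergekit passthrough sketch (assumptions: local path name,
# exact layer boundaries). The final 0.33-weight blend with MLewd would be
# a separate linear-merge step after this one.
merge_method: passthrough
slices:
  - sources:
      - model: Undi95/MLewd-L2-13B-v2-3     # 8 MLewd layers replace the first 8
        layer_range: [0, 8]
  - sources:
      - model: ./luban-spicyboros-merge     # assumed path to the base merge
        layer_range: [8, 16]
  - sources:
      - model: Undi95/MLewd-L2-13B-v2-3     # layers 16-20 also taken from MLewd
        layer_range: [16, 20]
  - sources:
      - model: ./luban-spicyboros-merge
        layer_range: [20, 40]
dtype: float16
```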

Description

This repo contains quantized files of Unholy v1.1, an uncensored model.

Models and loras used

  • uukuguy/speechless-llama2-luban-orca-platypus-13b
  • jondurbin/spicyboros-13b-2.2
  • Undi95/MLewd-L2-13B-v2-3
  • ausboss/llama2-13b-supercot-loras

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that completes the request.

### Instruction:
{prompt}

### Response:
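For anyone scripting against this model, the Alpaca template above is just string concatenation. A minimal sketch (the function name is mine, not part of any library):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```

Pass the resulting string to whatever GGUF runtime you use (llama.cpp and similar loaders accept a plain prompt string).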

Default prompt template

<prompt>

Reply: