LLAMA-3_8B_Unaligned_Alpha_RP_Soup

Model Details

Censorship level: Medium

This model is the outcome of multiple merges, starting with the base model SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha. The merging process was conducted in several stages:

Merge 1: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2.
Merge 2: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with invisietch/EtherealRainbow-v0.3-8B.
Soup 1: Merge 1 was combined with Merge 2.
Final Merge: Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.
Mergekit configs:

Merge 1

slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: BeaverAI/Llama-3SOME-8B-v2d
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
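
In these configs, t is the SLERP interpolation factor: each five-element value list is spread as a gradient across the 32 layers (0 keeps the base model's weights, 1 takes the other model's), with separate schedules for self-attention and MLP tensors and a flat 0.5 for everything else. For intuition, here is a minimal PyTorch sketch of SLERP on a single pair of weight tensors (illustrative only; the function name is ours, and mergekit's actual implementation handles normalization and edge cases somewhat differently):

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Angle between the two tensors, measured on their normalized flattenings.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    s = torch.sin(theta)
    # Spherical interpolation: t=0 returns a, t=1 returns b.
    mixed = (torch.sin((1.0 - t) * theta) / s) * a_flat + (torch.sin(t * theta) / s) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)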

Merge 2

slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

Soup 1

slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

Final Merge

slices:
  - sources:
      - model: Soup 1 # the intermediate model produced by the Soup 1 step (a local path in practice)
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: Soup 1 # likewise, the local path of the Soup 1 intermediate
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
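
Each of these steps can be reproduced by saving the config to a YAML file and running it through mergekit, either via the mergekit-yaml CLI (roughly: mergekit-yaml merge1.yml ./merge1-out) or its Python entry point. A minimal sketch of the latter, with placeholder paths:

import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one of the SLERP configs above, saved locally as e.g. merge1.yml.
with open("merge1.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged model to ./merge1-out; drop cuda=True to merge on CPU.
run_merge(
    merge_config,
    "./merge1-out",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)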

The final model is surprisingly coherent (though slightly more censored), which is somewhat unexpected given that all of the intermediate merge steps were fairly incoherent.

LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available in the following quantizations:

Model instruction template (either ChatML or Llama-3 can be used):

ChatML

<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer

Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
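
The Llama-3 format maps onto transformers' standard chat-template machinery. A minimal sketch, assuming the repo's tokenizer ships a Llama-3 chat template (the user message is a placeholder; the system prompt is the one from the ChatML example above):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Write a short scene set in a rain-soaked city."},
]

# Renders the header/eot structure shown above and appends the assistant
# header so the model continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

Sampling settings for the generate() call are given under the presets below.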

Recommended generation presets:

Nothing definitive yet, but the Midnight Enigma preset sometimes gives nice results:

max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
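
These settings map directly onto transformers' generate() arguments. Continuing from the snippet above (min_p requires a reasonably recent transformers release):

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.98,
    top_p=0.37,
    top_k=100,
    typical_p=1.0,
    min_p=0.0,
    repetition_penalty=1.18,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))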


*The model may sometimes produce overly long outputs.

The base model used for the merge - LLAMA-3_8B_Unaligned_Alpha - is available in the following quantizations:

Censorship level: Low - Medium

Support

GPUs are expensive, so any support helps:
  • My Ko-fi page: all donations go toward research resources and compute, every bit is appreciated 🙏🏻
  • My Patreon: all donations go toward research resources and compute, every bit is appreciated 🙏🏻
