# Prikol

*I don't even know anymore*
## Overview
99% of mergekit addicts quit before they hit it big.
Gosh, I need to create an org for my test runs - my profile looks like a dumpster.
What was it again? Ah, the new model.
Exactly what I wanted. All I had to do was yank out the cursed official DeepSeek distill and here we are.
From brief tests, it gave me some unusual takes on the character cards I'm used to. That alone makes it worth it, imo. Also, the writing is kinda nice.
## Settings

Prompt format: Llama3

Samplers: temp 1.15, minP 0.015 (by @Geechan)
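If you're running the model without a frontend, the same samplers can be set directly in `transformers`. A minimal sketch, assuming a recent `transformers` release with `min_p` support; the repo id is a stand-in for whichever weights you actually load:

```python
# Sketch: applying the recommended samplers (temp 1.15, minP 0.015)
# with transformers. Model id is a placeholder, not an official path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nohobby/L3.3-Prikol-70B-v0.5"  # assumed id, for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,
    temperature=1.15,  # recommended temp from above
    min_p=0.015,       # recommended minP from above
    max_new_tokens=256,
)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```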
## Quants
## Merge Details
There's a ridiculous number of nonsensical mergekit configs behind this one, but it could be worse, believe me.
Anyway, here are all the merge steps for this thing, combined:
### Test-Step1

```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: base
base_model: Sao10K/L3.3-70B-Euryale-v2.3
models:
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      density: 0.55
      weight: 1
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      density: 0.55
      weight: 1
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      density: 0.55
      weight: 1
```
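Each step is a standalone mergekit config: save it to a file and feed it to `mergekit-yaml <config> <out_dir>`. The same can be done from Python, following the usage shown in the mergekit README; option names may differ slightly between versions:

```python
# Sketch: running the Test-Step1 config above via mergekit's Python API.
# File and output paths are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("test-step1.yml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./L3.3-Test-Step1",
    options=MergeOptions(
        cuda=True,            # merge on GPU if one is available
        copy_tokenizer=True,  # materialize the tokenizer_source choice
        lazy_unpickle=True,   # lower peak RAM while loading shards
    ),
)
```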
### Test-Step2

```yaml
models:
  - model: mergekit-community/L3.3-Test-Step1
  - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
    parameters:
      density: 0.75
      gamma: 0.01
      weight: 0.05
  - model: Nohobby/AbominationSnowPig
    parameters:
      density: 0.77
      gamma: 0.007
      weight: 0.07
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      density: 0.88
      gamma: 0.008
      weight: 0.28
base_model: mergekit-community/L3.3-Test-Step1
merge_method: breadcrumbs_ties
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
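For context on the `density`/`gamma` pairs above: breadcrumbs-style methods sparsify each model's delta from the base before merging. Per my reading of mergekit's docs, `gamma` drops the largest-magnitude fraction of the delta (the outliers) and `density` keeps a band just below them, zeroing the rest. A rough sketch of that masking, assumptions flagged in comments:

```python
# Rough sketch of breadcrumbs-style sparsification as I understand it
# from the mergekit docs; not mergekit's actual implementation.
import torch

def breadcrumbs_mask(delta: torch.Tensor, density: float, gamma: float) -> torch.Tensor:
    """Keep a mid band of the delta: drop the top `gamma` fraction by
    magnitude, retain the next `density` fraction, zero everything else."""
    n = delta.numel()
    k_top = int(gamma * n)    # largest-magnitude outliers to discard
    k_keep = int(density * n) # band to keep below the outliers
    idx = torch.argsort(delta.abs().flatten(), descending=True)
    mask = torch.zeros(n, dtype=torch.bool)
    mask[idx[k_top:k_top + k_keep]] = True
    return mask.view_as(delta)

# e.g. the RPMax entry above: density 0.88, gamma 0.008
masked = breadcrumbs_mask(torch.randn(4096, 4096), 0.88, 0.008)
```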
### ASP-Step1

```yaml
models:
  - model: pankajmathur/orca_mini_v9_3_70B
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 1
      density: 0.55
      gamma: 0.03
  - model: Undi95/Sushi-v1.4
    parameters:
      weight: 0.069
      gamma: 0.001
      density: 0.911
merge_method: breadcrumbs
base_model: pankajmathur/orca_mini_v9_3_70B
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: bfloat16
tokenizer_source: base
```
### AbominationSnowPig

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: nuslerp
parameters:
  nuslerp_row_wise: true
models:
  - model: unsloth/Llama-3.3-70B-Instruct
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: Step1  # presumably the ASP-Step1 output above
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```
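The 11-element lists here are mergekit gradients: as I understand it, the entries act as evenly spaced anchor points over the layer stack and get linearly interpolated, so each layer ends up with its own weight (e.g. `v_proj` leans on Instruct in the middle layers and on Step1 at the ends). A rough sketch of that interpolation; check mergekit's docs for the exact rule:

```python
# Illustrative sketch of gradient-list interpolation, not mergekit's code:
# anchor values are spread evenly across layers and lerped in between.
def gradient_weights(anchors: list[float], num_layers: int) -> list[float]:
    weights = []
    for layer in range(num_layers):
        # Position of this layer along the anchor list.
        pos = layer / max(num_layers - 1, 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        weights.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return weights

# The v_proj gradient from the Instruct side, over an 80-layer 70B stack:
print(gradient_weights([0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0], 80))
```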
### Prikol-v0.1

```yaml
base_model: TheDrummer/Anubis-70B-v1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: union
merge_method: della_linear
models:
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: Blackroot/Mirai-3.0-70B
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
```
### Prikol-v0.2

```yaml
base_model: AbominationSnowPig
merge_method: model_stock
dtype: bfloat16
models:
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: Nohobby/L3.3-Prikol-70B-v0.1a
```
### Iamactuallygoinginsane

```yaml
base_model: mergekit-community/L3.3-Test-Step2
merge_method: model_stock
tokenizer_source: base
dtype: bfloat16
models:
  - model: Nohobby/L3.3-Prikol-70B-v0.2
  - model: Black-Ink-Guild/Pernicious_Prophecy_70B
```
### Imightbeontosmthing (Prikol-v0.5)

```yaml
models:
  - model: Nohobby/Iamactuallygoinginsane
  - model: Sao10K/L3.1-70B-Hanami-x1
    parameters:
      density: 0.7
      weight: 0.05
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
    parameters:
      density: 0.5
      weight: 0.1
base_model: Nohobby/Iamactuallygoinginsane
merge_method: ties
parameters:
  normalize: true
dtype: bfloat16
```
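For completeness, the steps form a small pipeline where each merge output feeds later configs. A sketch of running them in dependency order; the config file names are placeholders for the blocks above, and each output directory has to line up with the `model:` references in later steps (or be pushed to the hub under those names):

```python
# Sketch: chaining the merge steps above via the mergekit CLI.
# File names are placeholders for configs saved from this card.
import subprocess

steps = [
    ("asp-step1.yml", "./ASP-Step1"),
    ("abominationsnowpig.yml", "./AbominationSnowPig"),
    ("test-step1.yml", "./L3.3-Test-Step1"),
    ("test-step2.yml", "./L3.3-Test-Step2"),
    ("prikol-v0.1.yml", "./L3.3-Prikol-70B-v0.1"),
    ("prikol-v0.2.yml", "./L3.3-Prikol-70B-v0.2"),
    ("iamactuallygoinginsane.yml", "./Iamactuallygoinginsane"),
    ("prikol-v0.5.yml", "./L3.3-Prikol-70B-v0.5"),
]

for config, out_dir in steps:
    subprocess.run(["mergekit-yaml", config, out_dir, "--cuda"], check=True)
```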