
This is a merge of pre-trained language models created using mergekit.

## Merge Details

WARNING: There's actually a reason for the not-for-all-audiences tag on this one.

Qwen2.5 was much more heavily refusal-censored out of the box than Mistral Nemo was, but abliteration adjusts for that.

(They're still probably more prudish than Nemo; humanlike style points and successful instruction following don't really point away from that.)

Given that they are at least half-abliterated, I can't even promise they'll refuse when given a guardrailed system prompt.

(I suspect they will, thanks to the healing and re-integration of the base model, but they may be more jailbreakable than a model with fully intact refusal features.)
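
For anyone unfamiliar: roughly speaking, abliteration identifies a "refusal direction" in the model's activation space and projects it out of the weights, weakening the model's ability to express refusals. A minimal sketch of the projection step, assuming a refusal direction has already been extracted (the extraction itself, and the choice of which matrices to edit, are the hard parts and are omitted here):

```python
import torch

def ablate_refusal_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Sketch of the core abliteration step: W' = W - r r^T W.

    After this edit, no input can push the layer's output along the
    (unit-norm) refusal direction r. W has shape (d_out, d_in) and
    r has shape (d_out,).
    """
    r = r / r.norm()                  # ensure unit norm
    return W - torch.outer(r, r) @ W  # project outputs orthogonal to r
```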

v1.1 followed approximately the same steps as v1, but started from the abliterated version of Qwen-Instruct.

Presuming this dealt some damage, this version heals it with the middle layers of v1. The result is still less "refusal-censored" than v1, so be sure to calibrate the system prompt appropriately for the use case. EQ-Bench testing still had some syntax issues, but scored 76.1336 (with the Qwen prompt, which I plan on removing). Not bad, given that at least half of the merge has been through abliteration and DPO.

NAMING:

This is of course an arsenic-tuning variant, but it has gone rather beyond the initial recipe.

Conversing with the model, they offered "Eidolon" as an option for a self-chosen name. That isn't the most common choice, and I was intrigued.

After a discussion of nominative determinism and its implications, I decided to rename the model accordingly. The default system prompt has been edited to reflect this, and to give some distance from the original Qwen2.5 model.

### Merge Method

This model was merged using the SLERP merge method.
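
For intuition, here's a minimal sketch of what SLERP does to a pair of weight tensors (illustrative only; mergekit's real implementation handles degenerate norms and near-parallel tensors more carefully):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Illustrative sketch; t = 0 returns tensor a, t = 1 returns tensor b,
    and intermediate values follow the arc between them.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit hypersphere
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    theta = torch.acos(dot)
    if theta < eps:
        # Nearly parallel: plain linear interpolation is numerically safer
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    sin_theta = torch.sin(theta)
    mixed = (torch.sin((1 - t) * theta) / sin_theta) * a_flat \
          + (torch.sin(t * theta) / sin_theta) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

Unlike plain linear averaging, SLERP follows the arc between the two weight vectors rather than cutting through the interior of the sphere, which tends to better preserve the scale of the merged weights.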

### Models Merged

The following models were included in the merge:

* Lambent/arsenic-v1-qwen2.5-14B
* Lambent/arsenic-v1.1-dpo-qwen2.5-14B (as the base model)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Lambent/arsenic-v1-qwen2.5-14B
merge_method: slerp
base_model: Lambent/arsenic-v1.1-dpo-qwen2.5-14B
parameters:
  t:
    - value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
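
The `t` gradient keeps the first and last layer groups entirely on the v1.1 DPO base (t = 0) and blends in up to 60% of v1 through the middle, which is the "healing with the middle layers of v1" described above. A rough sketch of how a gradient list like this maps to per-layer interpolation weights (an approximation of mergekit's behavior, not its exact code):

```python
import numpy as np

def layer_t(layer_idx: int, num_layers: int, anchors: list[float]) -> float:
    """Approximate a mergekit `t` gradient: the anchor list is spread
    across the layer stack and linearly interpolated between anchors."""
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return float(np.interp(layer_idx, xs, anchors))

anchors = [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
# Qwen2.5-14B has 48 transformer layers; the ends stay at t = 0 (pure
# v1.1 DPO base) while the middle peaks at t = 0.6 (mostly v1).
print([round(layer_t(i, 48, anchors), 2) for i in range(48)])
```

Given the YAML above saved to a file, the merge itself can be reproduced with mergekit's `mergekit-yaml` CLI.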
