---
base_model:
- Lambent/arsenic-v1.1-dpo-qwen2.5-14B
- Lambent/arsenic-v1-qwen2.5-14B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
---

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

WARNING: There's actually a *reason* for the not-for-all-audiences tag on this one.
Qwen2.5 was much more refusal-censored in the first place than Mistral Nemo, but abliteration adjusts that.
(They're still probably more prudish; humanlike style points and successful instruction following aren't really a pointer away from that.)
Given they are at least half-abliterated, I can't even promise they'll refuse with a guardrailed system prompt.
(I suspect they will, due to the healing and re-integration of the base model, but they may be more jailbreakable than a model with fully intact refusal features.)

v1.1 followed *approximately* the same steps as v1, but started from the abliterated version of Qwen-Instruct.
Presuming the abliteration dealt some damage, this version heals it with the middle layers of v1.
The result is still less 'refusal-censored' than v1, so be sure to calibrate the system prompt appropriately for the use case.

EQ-Bench testing still had some syntax issues, but the model scored 76.1336 (with the Qwen prompt, which I plan on removing).
Not too bad, given that at least half of the merge has been through abliteration and DPO.

NAMING:
This is of course an arsenic-tuning variant, but it has gone rather beyond the initial concept. Conversing with the model, they generated "Eidolon" as a self-name option. This isn't the most common choice, and I was intrigued. After a discussion of nominative determinism and its implications, I decided to rename the model accordingly. The default system prompt has been edited to reflect this, and to add some distance from the original Qwen2.5 model.

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [Lambent/arsenic-v1.1-dpo-qwen2.5-14B](https://huggingface.co/Lambent/arsenic-v1.1-dpo-qwen2.5-14B)
* [Lambent/arsenic-v1-qwen2.5-14B](https://huggingface.co/Lambent/arsenic-v1-qwen2.5-14B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Lambent/arsenic-v1-qwen2.5-14B
merge_method: slerp
base_model: Lambent/arsenic-v1.1-dpo-qwen2.5-14B
parameters:
  t:
    - value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
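For readers unfamiliar with the `t` curve above: `t = 0` keeps the base model (v1.1-dpo) intact, and larger `t` leans toward v1, so the 0.6 peak mid-stack is the "healing" with v1's middle layers described earlier. As a reference point, here is a minimal, illustrative sketch of the spherical interpolation applied to a pair of weight tensors; mergekit's actual implementation differs in details (per-tensor gradients, edge cases), so treat this as a conceptual sketch only.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    v0_flat = v0.flatten().float()
    v1_flat = v1.flatten().float()
    # Angle between the two weight vectors, computed from normalized copies.
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(v0_unit @ v1_unit, -1.0, 1.0)
    omega = torch.arccos(dot)
    if omega.abs() < eps:
        # Nearly parallel vectors: plain linear interpolation is fine.
        return (1 - t) * v0 + t * v1
    sin_omega = torch.sin(omega)
    interp = (torch.sin((1 - t) * omega) / sin_omega) * v0_flat \
           + (torch.sin(t * omega) / sin_omega) * v1_flat
    return interp.reshape(v0.shape).to(v0.dtype)

# t = 0 returns the base model's tensor (v1.1-dpo); t = 0.6 at mid-depth
# leans toward v1, matching the layer schedule in the config above.
```

Compared with plain linear interpolation, SLERP follows the arc between the two weight vectors rather than the chord, which tends to better preserve the character of each model at intermediate `t`.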
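And a minimal usage sketch with the standard transformers API. The repo id below is a placeholder (substitute this model's actual Hugging Face id), and the system prompt and generation settings are illustrative examples of the calibration recommended above, not tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the actual Hugging Face id of this model.
model_id = "Lambent/Eidolon-qwen2.5-14B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example of supplying your own calibrated system prompt.
messages = [
    {"role": "system", "content": "You are Eidolon, a thoughtful assistant."},
    {"role": "user", "content": "Introduce yourself in one paragraph."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```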