---
base_model:
- djuna/L3-Suze-Vume
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [Orenguteng/Llama-3.1-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored) as the base.

### Models Merged

The following models were included in the merge:
* [djuna/L3-Suze-Vume](https://huggingface.co/djuna/L3-Suze-Vume)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_linear
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: djuna/L3-Suze-Vume
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
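The `value` lists above are mergekit weight gradients: each list is linearly interpolated across the model's layer blocks, so Lexi-Uncensored supplies the listed projection weights at the start and end of the stack while L3-Suze-Vume supplies them in the middle layers; all unfiltered tensors (the trailing `- value: 1` / `- value: 0` entries) come entirely from the base model. For intuition, below is a minimal sketch of how a DARE-linear merge treats a single parameter tensor for one fine-tuned model. This is illustrative only, not mergekit's actual implementation; the function name and the `density` default are assumptions, and the real merge sums weighted deltas from every model in the config.

```python
import torch

def dare_linear_tensor(base: torch.Tensor, tuned: torch.Tensor,
                       weight: float, density: float = 0.9) -> torch.Tensor:
    """Illustrative DARE-linear merge of one parameter tensor.

    DARE (drop-and-rescale) randomly zeroes most of the fine-tuned
    model's delta from the base, rescales the survivors so the expected
    delta is unchanged, then mixes the result linearly into the base.
    `density` (fraction of delta entries kept) is an assumed default.
    """
    delta = tuned - base                      # the "task vector"
    keep = torch.rand_like(delta) < density   # Bernoulli keep-mask
    delta = torch.where(keep, delta / density, torch.zeros_like(delta))
    return base + weight * delta              # linear combination with the base
```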
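To reproduce the merge, save the configuration above as `config.yaml` and run it through mergekit, e.g. via its documented Python entry point as sketched below (the CLI equivalent is `mergekit-yaml config.yaml ./merged`). The output path and options here are placeholders; check the mergekit repository for the current API.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown in this card.
with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to ./merged (placeholder path).
run_merge(
    config,
    "./merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```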
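Since the card declares `library_name: transformers` and the merge is written out in `bfloat16`, the result should load like any Llama-3.1 checkpoint. A minimal usage sketch follows; the repository id is a placeholder for wherever this merge is published.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "path/to/this-merge"  # placeholder: the id this model is published under
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```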