---
base_model:
- MrRobotoAI/MrRoboto-ProLong-8b-v2k
- MrRobotoAI/Thor-v1.4-8b-DARK-FICTION
- MrRobotoAI/MrRoboto-ProLong-8b-v2h
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [MrRobotoAI/MrRoboto-ProLong-8b-v2k](https://huggingface.co/MrRobotoAI/MrRoboto-ProLong-8b-v2k) as the base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/Thor-v1.4-8b-DARK-FICTION](https://huggingface.co/MrRobotoAI/Thor-v1.4-8b-DARK-FICTION)
* [MrRobotoAI/MrRoboto-ProLong-8b-v2h](https://huggingface.co/MrRobotoAI/MrRoboto-ProLong-8b-v2h)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_linear
models:
  - model: MrRobotoAI/Thor-v1.4-8b-DARK-FICTION
    parameters:
      weight:
        - filter: v_proj
          value: [0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25]
        - filter: o_proj
          value: [0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25]
        - filter: up_proj
          value: [0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25]
        - filter: gate_proj
          value: [0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25]
        - filter: down_proj
          value: [0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25]
        - value: 1
  - model: MrRobotoAI/MrRoboto-ProLong-8b-v2h
    parameters:
      weight:
        - filter: v_proj
          value: [0.75, 0.75, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.75]
        - filter: o_proj
          value: [0.75, 0.75, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.75]
        - filter: up_proj
          value: [0.75, 0.75, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.75]
        - filter: gate_proj
          value: [0.75, 0.75, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.75]
        - filter: down_proj
          value: [0.75, 0.75, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.75]
        - value: 0
base_model: MrRobotoAI/MrRoboto-ProLong-8b-v2k
tokenizer_source: base
dtype: bfloat16
```
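For intuition, the `dare_linear` method sparsifies each model's task vector (its delta from the base) by randomly dropping a fraction of entries and rescaling the survivors, then takes a weighted linear sum on top of the base. Mergekit performs this internally over real tensors; the toy sketch below (the helper `dare_linear_merge` is illustrative, not mergekit's API) shows the idea on plain Python lists, with `density = 1 - drop probability`:

```python
import random

def dare_linear_merge(base, deltas, weights, density=0.9, seed=0):
    """Toy sketch of DARE + linear merging: drop a random fraction
    (1 - density) of each task vector's entries, rescale survivors
    by 1/density to preserve the expected value, then add the
    weighted sum of sparsified deltas back onto the base weights."""
    rng = random.Random(seed)
    merged = list(base)
    for delta, w in zip(deltas, weights):
        for i, d in enumerate(delta):
            if rng.random() < density:           # keep this entry
                merged[i] += w * d / density     # rescale survivor
    return merged

# Toy 4-parameter "models"; real merges operate per-tensor (v_proj, o_proj, ...).
base    = [0.0, 0.0, 0.0, 0.0]
model_a = [1.0, 1.0, 1.0, 1.0]   # stand-in for Thor-v1.4
model_b = [2.0, 2.0, 2.0, 2.0]   # stand-in for ProLong-v2h
deltas  = [[m[i] - base[i] for i in range(len(base))] for m in (model_a, model_b)]

# With density=1.0 nothing is dropped, so this reduces to a weighted average.
print(dare_linear_merge(base, deltas, [0.5, 0.5], density=1.0))  # [1.5, 1.5, 1.5, 1.5]
```

In the actual configuration above, the per-filter weight lists give layer-wise interpolation: the 0.25/0.75 splits at the first and last layers favor ProLong-v2h, while the middle layers blend both models equally.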