---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Dolph-Lund-Wizard-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cf23cffbd0cc580bc65c73/mAkwMM8uhVLVnFS-K9R_v.png)

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest as the base model.

### Models Merged

The following models were included in the merge:
* /Users/etherops1/AI/Not-WizardLM-2-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
parameters:
  int8_mask: true
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
  embed_slerp: true
models:
  - model: /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
    # No parameters necessary for base model
  - model: /Users/etherops1/AI/Not-WizardLM-2-7B
    parameters:
      density: 0.58
      weight: 0.4
base_model: /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
tokenizer_source: model:/Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
dtype: bfloat16
```
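
To reproduce the merge, this configuration can be saved to a file (e.g. `config.yaml`) and passed to mergekit's `mergekit-yaml` entry point, after replacing the local `/Users/etherops1/...` paths with paths or Hugging Face repo IDs that resolve on your machine.

Below is a minimal inference sketch using transformers. The repo ID `Noodlz/Dolph-Lund-Wizard-7B` is an assumption based on the model name; substitute the actual Hub ID or the local directory containing the merged weights.

```python
# Minimal inference sketch. "Noodlz/Dolph-Lund-Wizard-7B" is a hypothetical
# repo ID; point this at the real Hub ID or your local merge output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Noodlz/Dolph-Lund-Wizard-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain the DARE-TIES merge method in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```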