---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---

# miquliz-120b

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/RFEW_K0ABp3k_N3j02Ki4.jpeg)

- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.4bpw-h6-exl2) | 2.65bpw | [2.9bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.9bpw-h6-exl2) | [4.0bpw](https://huggingface.co/LoneStriker/miquliz-120b-4.0bpw-h6-exl2)
- GGUF: [IQ3_XXS](https://huggingface.co/wolfram/miquliz-120b-GGUF) | [Q4_K_S+Q4_K_M](https://huggingface.co/NanoByte/miquliz-120b-Q4-GGUF)
- HF: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)

This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit). Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).

Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit), the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.

Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!

## Prompt template: Mistral

```
[INST] {prompt} [/INST]
```

See also: [πŸΊπŸ¦β€β¬› LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)

A minimal, untested loading and prompting sketch using this format appears at the end of this card.

## Model Details

- Max Context: 32768 tokens
- Layers: 137

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
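As a rough illustration of how passthrough merging stacks layer slices, and of where the 137-layer figure in Model Details comes from, here is a minimal Python sketch. The slice ranges are copied from the mergekit configuration shown below; everything else is plain arithmetic (mergekit's `layer_range` is half-open, so `[0, 16]` contributes 16 layers).

```python
# Slice ranges taken verbatim from the mergekit config in the Configuration
# section below; layer_range is half-open, so [0, 16] contributes 16 layers.
slices = [
    (0, 16),   # 152334H/miqu-1-70b-sf
    (8, 24),   # lizpreciatior/lzlv_70b_fp16_hf
    (17, 32),  # 152334H/miqu-1-70b-sf
    (25, 40),  # lizpreciatior/lzlv_70b_fp16_hf
    (33, 48),  # 152334H/miqu-1-70b-sf
    (41, 56),  # lizpreciatior/lzlv_70b_fp16_hf
    (49, 64),  # 152334H/miqu-1-70b-sf
    (57, 72),  # lizpreciatior/lzlv_70b_fp16_hf
    (65, 80),  # 152334H/miqu-1-70b-sf
]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 137, matching the "Layers" entry under Model Details
```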
### Models Merged

The following models were included in the merge:

- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [8, 24]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [17, 32]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [25, 40]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [33, 48]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [41, 56]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [49, 64]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [57, 72]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [65, 80]
    model: 152334H/miqu-1-70b-sf
```

## Credits & Special Thanks

- 1st model:
  - original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
  - leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
  - f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)

### Support

- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!

#### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
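For reference, here is the minimal loading and prompting sketch mentioned in the prompt template section above. It is an untested sketch, not a recommendation: it assumes the unquantized [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b) weights, enough memory to hold them, and default generation settings; only the Mistral `[INST] ... [/INST]` format is taken from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wolfram/miquliz-120b"  # unquantized HF repo linked at the top of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # assumption: use the checkpoint's native dtype (float16)
    device_map="auto",    # assumption: requires accelerate and very large VRAM/RAM
)

# Mistral prompt format, as shown in the "Prompt template" section
prompt = "[INST] Write a short poem about llamas. [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and print only the model's reply
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```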