Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

Spicy-Laymonade-7B - GGUF
- Model creator: https://huggingface.co/ABX-AI/
- Original model: https://huggingface.co/ABX-AI/Spicy-Laymonade-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Spicy-Laymonade-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Spicy-Laymonade-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Spicy-Laymonade-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Spicy-Laymonade-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Spicy-Laymonade-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Spicy-Laymonade-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Spicy-Laymonade-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Spicy-Laymonade-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Spicy-Laymonade-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Spicy-Laymonade-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Spicy-Laymonade-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Spicy-Laymonade-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Spicy-Laymonade-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Spicy-Laymonade-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Spicy-Laymonade-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Spicy-Laymonade-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Spicy-Laymonade-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Spicy-Laymonade-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Spicy-Laymonade-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Spicy-Laymonade-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Spicy-Laymonade-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf/blob/main/Spicy-Laymonade-7B.Q6_K.gguf) | Q6_K | 5.53GB |
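These files work with any llama.cpp-compatible runtime. As a minimal sketch of how to fetch one quant and run it, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (the repo and file names come from the table above; the `Llama` parameters are illustrative):

```python
# Minimal sketch: download one quant from this repo and run a prompt
# with llama-cpp-python. Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is a common balance of size and quality; pick any file from the table.
model_path = hf_hub_download(
    repo_id="RichardErkhov/ABX-AI_-_Spicy-Laymonade-7B-gguf",
    filename="Spicy-Laymonade-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # n_ctx: context window size
out = llm("Write a haiku about lemonade.", max_tokens=64)
print(out["choices"][0]["text"])
```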
Original model description:

---
base_model:
- cgato/TheSpice-7b-v0.1.1
- ABX-AI/Laymonade-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: other
---

GGUF: https://huggingface.co/ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/bMW7mRqBS_xQJBXn-szWS.png)

# Spicy-Laymonade-7B

Well, we have Laymonade, so why not spice it up? This merge is a step toward creating a new 9B; that said, I did try it out on its own, and it seemed to work pretty well.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method (a sketch of the interpolation appears after the configuration below).

### Models Merged

The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: cgato/TheSpice-7b-v0.1.1
        layer_range: [0, 32]
      - model: ABX-AI/Laymonade-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.6, 0.2, 0.5]
    - filter: mlp
      value: [0.3, 0.7, 0.4, 0.8, 0.5]
    - value: 0.5
dtype: bfloat16
```
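For intuition: SLERP blends each pair of weight tensors along the arc between them (treating each tensor as one high-dimensional vector) rather than along a straight line, which tends to preserve weight geometry better than plain averaging. In the config above, the per-filter `value` lists give `t` a gradient across layer depth for the `self_attn` and `mlp` tensors, with `0.5` as the default everywhere else. The following is a minimal, illustrative sketch of the interpolation itself, not mergekit's actual implementation (mergekit handles normalization and edge cases differently):

```python
# Illustrative SLERP between two weight tensors (not mergekit's code).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Spherical linear interpolation between two tensors of the same shape."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, viewed as high-dimensional vectors.
    cos_omega = torch.dot(a_f / a_f.norm(), b_f / b_f.norm()).clamp(-1.0, 1.0)
    omega = torch.acos(cos_omega)
    so = torch.sin(omega)
    if so.abs() < 1e-6:
        # Nearly colinear tensors: plain linear interpolation is stable.
        out = (1.0 - t) * a_f + t * b_f
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_f \
            + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)

# e.g. blend two attention weight matrices at t = 0.5
merged = slerp(0.5, torch.randn(32, 32), torch.randn(32, 32))
```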