Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

BruinsV2-OpHermesNeu-11B - GGUF

- Model creator: https://huggingface.co/Ba2han/
- Original model: https://huggingface.co/Ba2han/BruinsV2-OpHermesNeu-11B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BruinsV2-OpHermesNeu-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [BruinsV2-OpHermesNeu-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [BruinsV2-OpHermesNeu-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [BruinsV2-OpHermesNeu-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [BruinsV2-OpHermesNeu-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [BruinsV2-OpHermesNeu-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [BruinsV2-OpHermesNeu-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [BruinsV2-OpHermesNeu-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [BruinsV2-OpHermesNeu-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [BruinsV2-OpHermesNeu-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [BruinsV2-OpHermesNeu-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [BruinsV2-OpHermesNeu-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [BruinsV2-OpHermesNeu-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [BruinsV2-OpHermesNeu-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [BruinsV2-OpHermesNeu-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [BruinsV2-OpHermesNeu-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [BruinsV2-OpHermesNeu-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [BruinsV2-OpHermesNeu-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [BruinsV2-OpHermesNeu-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [BruinsV2-OpHermesNeu-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [BruinsV2-OpHermesNeu-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [BruinsV2-OpHermesNeu-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf/blob/main/BruinsV2-OpHermesNeu-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
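Any single file from the table above can be fetched without cloning the whole repository. A minimal sketch using `huggingface_hub` (the chosen quant is just an example; any filename from the table works):

```python
# Minimal sketch: download one GGUF file from this repo with huggingface_hub.
# Requires `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/Ba2han_-_BruinsV2-OpHermesNeu-11B-gguf",
    filename="BruinsV2-OpHermesNeu-11B.Q4_K_M.gguf",  # pick any quant from the table above
)
print(path)  # local cache path of the downloaded file
```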
Original model description:

---
license: mit
---

| Task | Version | Metric | Value | | Stderr |
|-------------|------:|--------|-----:|---|-----:|
| arc_challenge | 0 | acc | 0.6527 | ± | 0.0139 |
| | | acc_norm | 0.6869 | ± | 0.0136 |

**Warning! This model may or may not be contaminated ([see discussion 474](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474)). What a shame. It still performs well, though.**

A passthrough merge of OpenHermes-2.5-neural-chat-7b-v3-1 and Bruins-V2. To be updated.

Template: ChatML

My settings (a llama-cpp-python sketch applying these appears at the end of this card):

- Temperature: 0.7-0.8
- Min_p: 0.12
- Top_K: 0
- Repetition Penalty: 1.16
- Mirostat Tau: 2.5-3
- Mirostat Eta: 0.12

Personal Thoughts:

- The model sometimes emits wrong tags; you can add those to "Custom stopping strings" in Oobabooga.
- Output with Mirostat consistently felt smarter than a fixed Top_K setting.

Note: In some instances the model hallucinates hard in chat mode for me, like writing adblocker messages. Kind of funny. I am not sure which of the datasets involved was poisoned.
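As a rough illustration of the settings above, here is a minimal llama-cpp-python sketch. It is an assumption-laden example, not the author's setup: Mirostat v2 and the model path are assumptions, the ChatML prompt is hand-rolled per the template note, and `min_p` needs a reasonably recent llama-cpp-python.

```python
# Minimal sketch: run the model with this card's sampler settings via llama-cpp-python.
# Requires `pip install llama-cpp-python`; model path assumes the download sketch above.
from llama_cpp import Llama

llm = Llama(
    model_path="BruinsV2-OpHermesNeu-11B.Q4_K_M.gguf",  # assumption: downloaded earlier
    n_ctx=4096,
)

# ChatML prompt format, as specified by the "Template: ChatML" note above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain what a passthrough merge is in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=0.75,     # card suggests 0.7-0.8
    min_p=0.12,
    top_k=0,              # 0 disables top-k sampling in llama.cpp
    repeat_penalty=1.16,
    mirostat_mode=2,      # assumption: Mirostat v2
    mirostat_tau=2.75,    # card suggests 2.5-3
    mirostat_eta=0.12,
    stop=["<|im_end|>"],  # ChatML end tag; also catches the stray tags mentioned above
)
print(out["choices"][0]["text"])
```

Note that with Mirostat enabled, llama.cpp's sampling chain generally bypasses top-k and min-p anyway, which is consistent with the author's observation that Mirostat output felt smarter than a fixed Top_K setting.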