---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp Quantizations of Einstein-v5-v0.2-7B

Using llama.cpp release b2536 for quantization.

Original model: https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v5-v0.2-7B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v5-v0.2-7B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v5-v0.2-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Einstein-v5-v0.2-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Einstein-v5-v0.2-7B-Q5_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Einstein-v5-v0.2-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight. |
| [Einstein-v5-v0.2-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Einstein-v5-v0.2-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, uses a newer quantization method. |
| [Einstein-v5-v0.2-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| [Einstein-v5-v0.2-7B-Q4_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Einstein-v5-v0.2-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v5-v0.2-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Einstein-v5-v0.2-7B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| [Einstein-v5-v0.2-7B-IQ3_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [Einstein-v5-v0.2-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Einstein-v5-v0.2-7B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
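
If you prefer to grab a single quant programmatically instead of through the web UI, the `huggingface_hub` Python library can download one file from this repo. A minimal sketch, assuming `huggingface_hub` is installed and using the Q4_K_M file as an example (swap in any filename from the table above):

```python
# Minimal sketch: download one GGUF quant from this repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; the filename below is just an example.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/Einstein-v5-v0.2-7B-GGUF",
    filename="Einstein-v5-v0.2-7B-Q4_K_M.gguf",  # pick any file from the table
    local_dir=".",                                # save into the current directory
)
print(f"Downloaded to {model_path}")
```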
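
Once downloaded, the GGUF file can be loaded by any llama.cpp-based runtime. Below is a rough sketch using the `llama-cpp-python` bindings; the ChatML prompt format is assumed from the `chatml` tag on this card, and settings such as `n_ctx` and `max_tokens` are illustrative, not prescriptive:

```python
# Rough sketch: run the downloaded quant with llama-cpp-python (pip install llama-cpp-python).
# ChatML formatting is assumed based on the "chatml" tag on this model card.
from llama_cpp import Llama

llm = Llama(
    model_path="Einstein-v5-v0.2-7B-Q4_K_M.gguf",  # path from the download step
    n_ctx=4096,  # illustrative context length
)

prompt = (
    "<|im_start|>user\n"
    "State Newton's second law.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```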