---
license: other
tags:
  - axolotl
  - generated_from_trainer
  - Mistral
  - instruct
  - finetune
  - chatml
  - gpt4
  - synthetic data
  - science
  - physics
  - chemistry
  - biology
  - math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
  - allenai/ai2_arc
  - camel-ai/physics
  - camel-ai/chemistry
  - camel-ai/biology
  - camel-ai/math
  - metaeval/reclor
  - openbookqa
  - mandyyyyii/scibench
  - derek-thomas/ScienceQA
  - TIGER-Lab/ScienceEval
  - jondurbin/airoboros-3.2
  - LDJnr/Capybara
  - Cot-Alpaca-GPT4-From-OpenHermes-2.5
  - STEM-AI-mtl/Electrical-engineering
  - knowrohit07/saraswati-stem
  - sablo/oasst2_curated
  - lmsys/lmsys-chat-1m
  - TIGER-Lab/MathInstruct
  - bigbio/med_qa
  - meta-math/MetaMathQA-40K
  - openbookqa
  - piqa
  - metaeval/reclor
  - derek-thomas/ScienceQA
  - scibench
  - sciq
  - Open-Orca/SlimOrca
  - migtissera/Synthia-v1.3
  - TIGER-Lab/ScienceEval
  - allenai/WildChat
  - microsoft/orca-math-word-problems-200k
  - openchat/openchat_sharegpt4_dataset
  - teknium/GPTeacher-General-Instruct
  - m-a-p/CodeFeedback-Filtered-Instruction
  - totally-not-an-llm/EverythingLM-data-V3
  - HuggingFaceH4/no_robots
  - OpenAssistant/oasst_top1_2023-08-25
  - WizardLM/WizardLM_evol_instruct_70k
language:
  - en
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp Quantizations of Einstein-v6-7B

Using llama.cpp release b2589 for quantization.

Original model: https://huggingface.co/Weyaxi/Einstein-v6-7B
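The tags above include `chatml`, so prompts are assumed to follow the ChatML template (an assumption based on the tags, not stated elsewhere in this card):

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```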

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| Einstein-v6-7B-Q8_0.gguf | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| Einstein-v6-7B-Q6_K.gguf | Q6_K | 5.94GB | Very high quality, near perfect, recommended. |
| Einstein-v6-7B-Q5_K_M.gguf | Q5_K_M | 5.13GB | High quality, recommended. |
| Einstein-v6-7B-Q5_K_S.gguf | Q5_K_S | 4.99GB | High quality, recommended. |
| Einstein-v6-7B-Q5_0.gguf | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| Einstein-v6-7B-Q4_K_M.gguf | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, recommended. |
| Einstein-v6-7B-Q4_K_S.gguf | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| Einstein-v6-7B-IQ4_NL.gguf | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, newer quantization method, recommended. |
| Einstein-v6-7B-IQ4_XS.gguf | IQ4_XS | 3.94GB | Decent quality, newer quantization method with similar performance to Q4. |
| Einstein-v6-7B-Q4_0.gguf | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| Einstein-v6-7B-Q3_K_L.gguf | Q3_K_L | 3.82GB | Lower quality but usable, good for low-RAM setups. |
| Einstein-v6-7B-Q3_K_M.gguf | Q3_K_M | 3.51GB | Even lower quality. |
| Einstein-v6-7B-IQ3_M.gguf | IQ3_M | 3.28GB | Medium-low quality, newer quantization method with decent performance. |
| Einstein-v6-7B-IQ3_S.gguf | IQ3_S | 3.18GB | Lower quality, newer quantization method with decent performance, recommended over Q3 quants. |
| Einstein-v6-7B-Q3_K_S.gguf | Q3_K_S | 3.16GB | Low quality, not recommended. |
| Einstein-v6-7B-Q2_K.gguf | Q2_K | 2.71GB | Extremely low quality, not recommended. |
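As a rough aid for choosing among the files above, here is a small sketch that picks the largest quant fitting a given memory budget. The helper function and its 1GB headroom default are hypothetical, not part of this repo; the sizes are the file sizes from the table, and actual memory use will be somewhat higher (KV cache, runtime overhead).

```python
# File sizes (GB) copied from the table in this README.
QUANT_SIZES_GB = {
    "Q8_0": 7.69, "Q6_K": 5.94, "Q5_K_M": 5.13, "Q5_K_S": 4.99,
    "Q5_0": 4.99, "Q4_K_M": 4.36, "Q4_K_S": 4.14, "IQ4_NL": 4.15,
    "IQ4_XS": 3.94, "Q4_0": 4.10, "Q3_K_L": 3.82, "Q3_K_M": 3.51,
    "IQ3_M": 3.28, "IQ3_S": 3.18, "Q3_K_S": 3.16, "Q2_K": 2.71,
}

def pick_quant(budget_gb, headroom_gb=1.0):
    """Return the largest quant whose file size fits the budget minus headroom,
    or None if nothing fits. Headroom is a rough allowance for KV cache etc."""
    usable = budget_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # budget of an 8GB GPU
print(pick_quant(4.0))
```

With an 8GB budget and the default headroom this picks Q6_K; at 4GB only Q2_K fits.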

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski