
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Mistral-7B-v0.2 - GGUF

Original model description:

license: apache-2.0

Conversion process (a scripted sketch follows the steps):

  1. Download the original weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
  2. Convert them with https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py
  3. You may need to copy tokenizer.model from the Mistral-7B-Instruct-v0.2 repo into the converted folder.
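The steps above could be scripted roughly as below. This is only a sketch, not the exact procedure used for this repo; in particular, the conversion script's flag names (--input_dir, --output_dir) are assumptions to verify against the script's own argument parser.

```python
# Rough sketch of the conversion steps; flag names for the convert script are assumed.
import shutil
import subprocess
import urllib.request

WEIGHTS_URL = "https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar"
CONVERT_SCRIPT = "transformers/src/transformers/models/mistral/convert_mistral_weights_to_hf.py"  # local checkout

# 1. Download and unpack the original weights.
urllib.request.urlretrieve(WEIGHTS_URL, "mistral-7B-v0.2.tar")
shutil.unpack_archive("mistral-7B-v0.2.tar", "mistral-7B-v0.2")

# 2. Run the Transformers conversion script (flag names are assumptions).
subprocess.run(
    [
        "python", CONVERT_SCRIPT,
        "--input_dir", "mistral-7B-v0.2",
        "--output_dir", "mistral-7B-v0.2-hf",
    ],
    check=True,
)

# 3. If tokenizer.model is missing from the converted folder, copy it over
#    from the Mistral-7B-Instruct-v0.2 repo.
```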
Model details:

  Format: GGUF
  Model size: 7.24B params
  Architecture: llama
  Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit
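
A minimal usage sketch for the GGUF files, assuming llama-cpp-python is installed (pip install llama-cpp-python) and one of the quantized files has been downloaded locally; the filename below is hypothetical and depends on which quantization you fetch.

```python
from llama_cpp import Llama

# Load a local GGUF file (hypothetical 4-bit filename -- substitute your download).
llm = Llama(
    model_path="Mistral-7B-v0.2.Q4_K_M.gguf",
    n_ctx=2048,  # context window to allocate
)

# Run a short completion and print the generated text.
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```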
