|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
tags: |
|
- finetuned |
|
inference: false |
|
base_model: mistralai/Mistral-7B-Instruct-v0.2 |
|
model_creator: Mistral AI |
|
model_name: Mistral 7B Instruct v0.2 |
|
model_type: mistral |
|
prompt_template: '<s>[INST] {prompt} [/INST] |
|
' |
|
quantized_by: rahulmanuwas |
|
--- |
|
# Mistral 7B Instruct v0.2 - GGUF |
|
|
|
These are quantized versions of `mistralai/Mistral-7B-Instruct-v0.2`. Two quantization methods were used: |
 |
- Q5_K_M: 5-bit; preserves most of the model's quality |
 |
- Q4_K_M: 4-bit; smaller file and lower memory use, at a slightly larger quality cost |
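 |
 |
For reference, GGUF files like these are typically produced with llama.cpp's conversion and quantization tools. A minimal sketch follows; the script and binary names vary between llama.cpp releases, and the local paths here are assumptions: |
 |
```shell |
# Convert the original Hugging Face checkpoint to a full-precision GGUF file |
# (in older llama.cpp releases this script is named convert.py) |
python convert_hf_to_gguf.py ./Mistral-7B-Instruct-v0.2 \ |
  --outfile mistral-7b-instruct-v0.2.f16.gguf --outtype f16 |
 |
# Quantize to the two formats listed above |
./llama-quantize mistral-7b-instruct-v0.2.f16.gguf mistral-7b-instruct-v0.2.Q5_K_M.gguf Q5_K_M |
./llama-quantize mistral-7b-instruct-v0.2.f16.gguf mistral-7b-instruct-v0.2.Q4_K_M.gguf Q4_K_M |
``` |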
|
|
|
|
## Description |
|
|
|
This repo contains GGUF format model files for [Mistral AI's Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). |
|
|
|
This model was quantized in Google Colab. |
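 |
 |
Whatever runtime loads these files, raw prompts should follow the template given in the metadata above. A minimal sketch in Python (the helper name `format_prompt` is illustrative): |
 |
```python |
def format_prompt(prompt: str) -> str: |
    # Mistral Instruct expects the request wrapped in [INST] ... [/INST], |
    # preceded by the beginning-of-sequence token <s> |
    return f"<s>[INST] {prompt} [/INST]" |
 |
print(format_prompt("Why is the sky blue?")) |
# → <s>[INST] Why is the sky blue? [/INST] |
``` |
 |
Runtimes that apply the model's built-in chat template typically handle this wrapping automatically; the helper is only needed when sending raw prompt strings. |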