---
license: apache-2.0
pipeline_tag: text-generation
tags:
  - finetuned
  - text-generation-inference
  - mistral
inference: false
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_creator: Mistral AI
model_name: Mistral 7B Instruct v0.2
model_type: mistral
prompt_template: '<s>[INST] {prompt} [/INST] '
quantized_by: rahulmanuwas
library_name: adapter-transformers
---

# Mistral 7B Instruct v0.2 - GGUF

This repo contains quantized versions of mistralai/Mistral-7B-Instruct-v0.2, produced with two quantization methods:

- Q5_K_M: 5-bit; preserves most of the model's quality
- Q4_K_M: 4-bit; smaller footprint and lower memory use

## Description

This repo contains GGUF format model files for Mistral AI's Mistral 7B Instruct v0.2.

This model was quantized in Google Colab.
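As a rough sketch (not part of this repo's own documentation), the GGUF files can be run locally with llama-cpp-python, applying the prompt template from the metadata above. The model file name below is an assumption; substitute the actual `.gguf` file from this repo:

```python
# Minimal sketch for running a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). File name below is hypothetical.

def format_prompt(prompt: str) -> str:
    """Wrap a user message in the Mistral Instruct template:
    '<s>[INST] {prompt} [/INST] '"""
    return f"<s>[INST] {prompt} [/INST] "

# from llama_cpp import Llama
#
# llm = Llama(model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf")  # hypothetical file name
# out = llm(format_prompt("What is GGUF?"), max_tokens=128)
# print(out["choices"][0]["text"])
```

Q4_K_M is the usual choice when memory is tight; Q5_K_M trades a larger file for quality closer to the original weights.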