---
license: apache-2.0
pipeline_tag: text-generation
base_model: mistral-community/Mistral-7B-v0.2
tags:
  - transformers
  - safetensors
  - mistral
  - 4-bit
  - AWQ
  - text-generation
  - text-generation-inference
  - autotrain_compatible
  - endpoints_compatible
  - chatml
inference: false
quantized_by: Suparious
---

# mistralai/Mistral-7B-v0.2 AWQ

## Model Summary

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1 (all visible in the model config, as in the sketch after this list):

- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
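
A minimal sketch for checking those values, assuming the `transformers` package is installed; the commented values reflect the v0.2 config described above:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistral-community/Mistral-7B-v0.2")

print(cfg.max_position_embeddings)  # 32768 -> 32k context window
print(cfg.rope_theta)               # 1000000.0 -> rope-theta = 1e6
print(cfg.sliding_window)           # None -> sliding-window attention disabled
```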

For full details of this model, please read Mistral AI's paper and release blog post.

Like Mistral-7B-v0.1, this is a transformer model with the following architecture choices:

- Grouped-Query Attention
- Sliding-Window Attention (removed in v0.2, as noted above)
- Byte-fallback BPE tokenizer
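
This repository ships 4-bit AWQ weights. A minimal loading-and-generation sketch, assuming the `autoawq` package and a CUDA GPU; the repo id below is an assumption inferred from this card's file path, not something the card states:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed repo id for this quant; substitute the actual path if it differs.
model_id = "solidrust/Mistral-7B-v0.2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)

# The prompt already carries <s>, so skip the tokenizer's own special tokens.
prompt = "<s>[INST] What is your favourite condiment? [/INST]"
input_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.cuda()

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```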

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

For example:

```python
# Parentheses make Python concatenate the adjacent string literals
# into a single multi-turn prompt.
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. "
    "It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
```

This format is also available as a chat template via the tokenizer's `apply_chat_template()` method.
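
A minimal sketch of building the same prompt through the chat-template path; it uses the `mistralai/Mistral-7B-Instruct-v0.2` tokenizer as a known carrier of this `[INST]` template, which is an assumption relative to this card:

```python
from transformers import AutoTokenizer

# Tokenizer that ships the [INST]/[/INST] chat template.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# tokenize=False returns the formatted string instead of token ids.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# -> <s>[INST] ... [/INST] ... </s>[INST] ... [/INST]
```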