---
base_model: https://huggingface.co/advanced-stack/MistralAI-v0.1-GGUF
inference: false
license: cc-by-nc-4.0
model_creator: mistral.ai
model_name: Mistral v0.1
model_type: llama
prompt_template: '{prompt}'
quantized_by: iandennismiller
pipeline_tag: text-generation
tags:
- mistral
---

# Mistral 7B

## Support for `calm`

These models support [calm](https://github.com/iandennismiller/calm), a language model runner that automatically uses the right prompts, templates, context size, and related settings. The particular quants in this repo were selected with calm in mind.

## From https://mistral.ai

> Mistral-7B-v0.1 is a small yet powerful model adaptable to many use cases. Mistral 7B is better than Llama 2 13B on all benchmarks, has natural coding abilities, and an 8k sequence length. It's released under the Apache 2.0 licence. We made it easy to deploy on any cloud, and of course on your gaming GPU.

More info: https://mistral.ai/news/announcing-mistral-7b/
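The `prompt_template` field in the metadata above is `'{prompt}'`, meaning this base model takes raw text with no chat wrapper. A minimal sketch of how a runner might apply such a template before passing text to the model (the function name is hypothetical, not part of calm's actual API):

```python
# Hypothetical sketch of applying a model card's prompt_template.
# For this base model the template is simply '{prompt}', i.e. a pass-through;
# instruction-tuned variants would wrap the prompt in additional tokens.
PROMPT_TEMPLATE = "{prompt}"


def build_prompt(prompt: str, template: str = PROMPT_TEMPLATE) -> str:
    """Substitute the user's text into the template before inference."""
    return template.format(prompt=prompt)


print(build_prompt("Explain GGUF quantization in one sentence."))
```

With the pass-through template, the output is the input unchanged; swapping in a chat-style template (e.g. one with `[INST] ... [/INST]` markers) would only require changing the `template` argument.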