---
base_model: mistralai/Mixtral-8x7B-v0.1
inference: false
language:
- en
license: apache-2.0
model-index:
- name: Mixtral-8x7B
  results: []
model_creator: mistralai
model_name: Mixtral-8x7B
model_type: mixtral
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Inferless
tags:
- mixtral
- vllm
- GPTQ
---
Inferless

Serverless GPUs to scale your machine learning inference without the hassle of managing servers. Deploy complicated and custom models with ease.


Go through this tutorial to quickly deploy Mixtral-8x7B-v0.1 using Inferless.


# Mixtral-8x7B - GPTQ

- Model creator: [Mistralai](https://huggingface.co/mistralai)
- Original model: [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)

## Description

This repo contains GPTQ model files for [Mistralai's Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).

### About GPTQ

GPTQ is a post-training quantization method that shrinks model size and accelerates inference by quantizing weights against a calibration dataset, choosing quantized values that minimize the mean squared error introduced by quantization in a single pass. GPTQ achieves both memory efficiency and faster inference. It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using the AutoGPTQ loader
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.6 or later (the version pinned below); see the sketch after the build config
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) - for use from Python code

## Shared files and GPTQ parameters

Models are released as sharded safetensors files.

| Branch | Bits | GS | GPTQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ------------ | ------- | ---- |
| [main](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |

## How to use

You will need the following software packages and Python libraries:

```yaml
build:
  cuda_version: "12.1.1"
  system_packages:
    - "libssl-dev"
  python_packages:
    - "torch==2.1.2"
    - "vllm==0.2.6"
    - "transformers==4.36.2"
    - "accelerate==0.25.0"
```
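As a minimal sketch of serving the checkpoint with the pinned vLLM version, the snippet below loads the GPTQ weights through vLLM's offline `LLM` API and formats the request with the ChatML template from this card's `prompt_template` metadata. The repo id `Inferless/Mixtral-8x7B-v0.1-GPTQ` is a placeholder; substitute the actual path of this repository.

```python
from vllm import LLM, SamplingParams

# Placeholder repo id -- point this at the actual GPTQ repository.
llm = LLM(
    model="Inferless/Mixtral-8x7B-v0.1-GPTQ",
    quantization="gptq",  # tell vLLM the weights are GPTQ-quantized
    dtype="float16",
)

# ChatML prompt built from the card's prompt_template metadata.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what GPTQ quantization does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

params = SamplingParams(temperature=0.7, max_tokens=256, stop=["<|im_end|>"])
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```

Stopping on `<|im_end|>` keeps the model from generating past the end of its turn, matching the template above.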
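For loading the same files through Transformers instead, here is a sketch assuming `optimum` and `auto-gptq` are installed alongside the pinned `transformers==4.36.2`; the repo id is again a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inferless/Mixtral-8x7B-v0.1-GPTQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers detects the GPTQ quantization config stored in the
# checkpoint and runs the quantized weights via auto-gptq kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Mixtral-8x7B is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```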