---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
# Model Card for Mixtral-8x22B
Mistral AI finally released the weights to the [official Mistral AI organization](https://huggingface.co/mistralai) with both the base model and the instruct tune. \
[mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) & [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)

Hugging Face staff cloned this repo to an official new repo, [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1); you can download from there if you want to. \
Thanks HF staff for crediting me! \
Also [here's a very owo music](https://www.youtube.com/watch?v=dGYYzLLuYfs)! owo...

Converted to the Hugging Face Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py).

The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, Transformers loads the model in full precision. You can therefore substantially reduce the memory required to run the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note: `float16` precision only works on GPU devices.
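For a rough sense of what half precision saves before running the snippet below: Mixtral-8x22B has about 141B total parameters, so the weights alone take 4 bytes each in `float32` versus 2 in `float16`. A back-of-the-envelope sketch (weights only; ignores activations and the KV cache):

```python
# Rough weight-only memory estimate; assumes the published ~141B total parameter count.
total_params = 141e9
print(f"float32: ~{total_params * 4 / 1e9:.0f} GB")  # full precision: ~564 GB
print(f"float16: ~{total_params * 2 / 1e9:.0f} GB")  # half precision: ~282 GB
```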
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
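Even in half precision the weights are on the order of 280 GB, so a single GPU is usually not enough. One option is to let Transformers shard the model across every visible GPU with `device_map="auto"` (a minimal sketch, assuming `accelerate` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" (via accelerate) splits the layers across all available GPUs,
# spilling to CPU RAM if they do not fit.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```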
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
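The heading mentions 8-bit as well; a minimal sketch of both paths using the `BitsAndBytesConfig` API, which recent Transformers versions recommend over the bare `load_in_4bit` flag (assumes `bitsandbytes` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# For 8-bit instead, use BitsAndBytesConfig(load_in_8bit=True).
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16 on top of 4-bit weights
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quantization_config
)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```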
### Load the model with Flash Attention 2
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, use_flash_attention_2=True).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
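In more recent Transformers releases the `use_flash_attention_2` flag has been superseded by `attn_implementation`; a sketch assuming `transformers` >= 4.36 with the `flash-attn` package installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Flash Attention 2 requires half precision (float16 or bfloat16) and a CUDA device.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```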
## Notice
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall.