kittn committed on
Commit 09a315e
1 Parent(s): c254b36

Update README.md

Files changed (1)
  1. README.md +10 -120
README.md CHANGED
@@ -8,131 +8,21 @@ language:
  - en
  inference: false
  ---
- # Model Card for Mixtral-8x7B
- The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

- For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

- ## Warning
- This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.

- ## Instruction format

- This format must be strictly respected; otherwise, the model will generate sub-optimal outputs.

- The template used to build a prompt for the Instruct model is defined as follows:
- ```
- <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
- ```
- Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
 
- For reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
- ```python
- def tokenize(text):
-     return tok.encode(text, add_special_tokens=False)
-
- [BOS_ID] +
- tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_1) + [EOS_ID] +
-
- tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_N) + [EOS_ID]
- ```

- In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
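
For a quick sanity check, the same format can be produced through the tokenizer's chat template. A minimal sketch, assuming the Instruct model's tokenizer ships a chat template matching the format above:

```python
# Minimal sketch: assumes the HF tokenizer for the Instruct model ships a
# chat template matching the [INST] format described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "Instruction"},
    {"role": "assistant", "content": "Model answer"},
    {"role": "user", "content": "Follow-up instruction"},
]

# tokenize=False returns the rendered prompt string rather than token IDs,
# so special-token placement can be inspected directly.
print(tok.apply_chat_template(messages, tokenize=False))
```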
-
- ## Run the model
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- model = AutoModelForCausalLM.from_pretrained(model_id)
-
- text = "Hello my name is"
- inputs = tokenizer(text, return_tensors="pt")
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
-
- By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model, using the optimizations offered in the HF ecosystem:
-
- ### In half-precision
-
- Note that `float16` precision only works on GPU devices.
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ### Lower precision (8-bit & 4-bit) using `bitsandbytes`
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ### Load the model with Flash Attention 2
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ## Limitations
-
- The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
- # The Mistral AI Team
- Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
 
+ # Mixtral-8x7B (gpt-fast edition)

+ This repo holds quantized Mixtral-8x7B weights for use with [gpt-fast](https://github.com/pytorch-labs/gpt-fast/).

+ ## Compatibility
+ Conversion to int4 was broken, so this repo only holds fp8 weights.
+ Practically speaking, this means your GPU(s) need to be Ada Lovelace or newer and have enough VRAM to hold the model + KV cache + activations.

+ I'm hoping it can work on a pair of 4090s, which combined have 48 GiB (51.539607552 GB) of VRAM. Ignoring all overhead, the ~46.8 GB of fp8 weights (roughly one byte per parameter) leave
+ ~4.74 GB for the KV cache and activations, which should be enough (?).
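
A back-of-the-envelope check of that budget; the ~46.8 GB fp8 weight footprint is an assumption (about one byte per parameter for the ~46.7B-parameter model):

```python
# VRAM budget sketch for 2x RTX 4090.
# Assumption: fp8 weights occupy ~46.8e9 bytes (about one byte per
# parameter for the ~46.7B-parameter model).
GIB = 1024**3

total_bytes = 2 * 24 * GIB      # two 24 GiB cards
weight_bytes = 46.8e9           # assumed fp8 weight footprint

print(total_bytes / 1e9)                    # 51.539607552 GB
print((total_bytes - weight_bytes) / 1e9)   # ~4.74 GB left for KV cache + activations
```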
+ - [ ] TODO: Test on 2x4090 with TP=2
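
A sketch of what that test might look like, assuming the stock gpt-fast tensor-parallel invocation; the checkpoint path and flags are guesses, so check the gpt-fast README at the pinned commit:

```bash
# Sketch only: assumes the standard gpt-fast TP invocation; the checkpoint
# filename below is a guess.
torchrun --standalone --nproc_per_node=2 generate.py \
    --compile \
    --checkpoint_path checkpoints/mixtral-8x7b/model_fp8.pth \
    --prompt "Hello, my name is"
```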
+ ## Notes
+ Conversion was done with gpt-fast at [commit 7510a9d](https://github.com/pytorch-labs/gpt-fast/commit/7510a9df7d23725ae46e9fca7d6ae8ee3a8f448e).
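
Presumably the pipeline followed the usual gpt-fast flow; a sketch, where the source repo id and the fp8 quantization flag are assumptions:

```bash
# Sketch of the usual gpt-fast conversion flow. The script names exist in
# the repo, but the source repo id and the fp8 mode flag are assumptions -
# check quantize.py --help at commit 7510a9d.
export MODEL_REPO=mistralai/Mixtral-8x7B-v0.1   # assumed source checkpoint
python scripts/download.py --repo_id $MODEL_REPO
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/$MODEL_REPO
python quantize.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --mode fp8
```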