---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
# Hugging Face Transformers Conversion of Mixtral-8x7B-Instruct


# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
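
For the vLLM path, a minimal offline-inference sketch is shown below. It is an assumption-laden example: it presumes a recent vLLM release with Mixtral support, the prompt and sampling settings are purely illustrative, and a model of this size typically needs several GPUs (e.g. via `tensor_parallel_size`):

```python
from vllm import LLM, SamplingParams

# Loading Mixtral-8x7B usually requires multiple GPUs; adjust tensor_parallel_size
# to your hardware. The settings below are illustrative, not prescribed by this repo.
llm = LLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1", tensor_parallel_size=2)
sampling_params = SamplingParams(temperature=0.7, max_tokens=64)

# Prompt follows the [INST] instruction format described in the next section;
# the BOS token is typically added when the prompt is encoded.
prompt = "[INST] Explain what a sparse mixture of experts is. [/INST]"

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```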

## Instruction format

This format must be strictly respected; otherwise, the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.

As a reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +

tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```

In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
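
One way to make this concrete with the `transformers` tokenizer is sketched below; this is a minimal, hypothetical helper mirroring the pseudo-code above, and the `turns` list with its placeholder messages is purely illustrative:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

def tokenize(text):
    # No BOS/EOS added here; the loop below inserts them explicitly.
    return tok.encode(text, add_special_tokens=False)

# Placeholder conversation turns: (user message, assistant reply).
turns = [("USER_MESSAGE_1", "BOT_MESSAGE_1"), ("USER_MESSAGE_N", "BOT_MESSAGE_N")]

ids = [tok.bos_token_id]
for user_message, bot_message in turns:
    ids += tokenize("[INST]") + tokenize(user_message) + tokenize("[/INST]")
    ids += tokenize(bot_message) + [tok.eos_token_id]
```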

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
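
The snippet above feeds the model raw text. For the Instruct model you will usually want to follow the instruction format described earlier; the tokenizer's chat template can build that prompt for you. A minimal sketch, assuming the tokenizer in this repo ships a chat template matching that format (the example message is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative single-turn conversation.
messages = [{"role": "user", "content": "Explain sparse mixture of experts in one sentence."}]

# Renders the [INST] ... [/INST] prompt (including BOS) and returns token ids.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```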

By default, `transformers` loads the model in full precision. You may therefore want to further reduce the memory requirements for running the model via the optimizations offered in the HF ecosystem:

### In half-precision

Note that `float16` precision only works on GPU devices.

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

## Limitations

The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.