prince-canuma committed
Commit 4e4c52b
1 Parent(s): 94fc4ca

Update README.md

Files changed (1): README.md (+105 −148)
README.md CHANGED
@@ -1,169 +1,126 @@
  ---
  license: apache-2.0
- language:
- - fr
- - it
- - de
- - es
- - en
- inference:
-   parameters:
-     temperature: 0.5
- widget:
- - messages:
-   - role: user
-     content: What is your favorite condiment?
  ---
- # Model Card for Mixtral-8x22B-Instruct-v0.1-4bit
- The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
-
- Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
-
- For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
-
- ## Warning
- This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
-
- ## Instruction format
-
- This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
-
- The template used to build a prompt for the Instruct model is defined as follows:
- ```
- <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
- ```
- Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
-
- As a reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
- ```python
- def tokenize(text):
-     return tok.encode(text, add_special_tokens=False)
-
- [BOS_ID] +
- tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_1) + [EOS_ID] +
-
- tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_N) + [EOS_ID]
- ```
-
- In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
-
- In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied.

- ## Run the model
-
  ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
-
- messages = [
-     {"role": "user", "content": "What is your favourite condiment?"},
-     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
-     {"role": "user", "content": "Do you have mayonnaise recipes?"}
- ]
-
- inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
-
- outputs = model.generate(inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
-
- By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model, using the optimizations we offer in the HF ecosystem:
-
- ### In half-precision
-
- Note that `float16` precision only works on GPU devices.
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
-
- messages = [
-     {"role": "user", "content": "What is your favourite condiment?"},
-     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
-     {"role": "user", "content": "Do you have mayonnaise recipes?"}
- ]
-
- input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
-
- outputs = model.generate(input_ids, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ### Lower precision (8-bit & 4-bit) using `bitsandbytes`
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")
-
- messages = [
-     {"role": "user", "content": "What is your favourite condiment?"},
-     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
-     {"role": "user", "content": "Do you have mayonnaise recipes?"}
- ]
-
- input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
-
- outputs = model.generate(input_ids, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
- </details>
-
- ### Load the model with Flash Attention 2
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")
-
- messages = [
-     {"role": "user", "content": "What is your favourite condiment?"},
-     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
-     {"role": "user", "content": "Do you have mayonnaise recipes?"}
- ]
-
- input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
-
- outputs = model.generate(input_ids, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
- </details>
-
- ## Limitations
-
- The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
  # The Mistral AI Team
- Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.

  ---
  license: apache-2.0
  ---

+ # Model Card for Mixtral-8x22B-Instruct-v0.1
+ The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).

+ Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).

+ ## Run the model
  ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+ from mistral_common.protocol.instruct.messages import (
+     AssistantMessage,
+     UserMessage,
+ )
+ from mistral_common.protocol.instruct.tool_calls import (
+     Tool,
+     Function,
+ )
+ from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
+ from mistral_common.protocol.instruct.request import ChatCompletionRequest
+
+ device = "cuda"  # the device to load the model onto
+
+ tokenizer_v3 = MistralTokenizer.v3()
+
+ # Build a chat-completion request that advertises one callable tool
+ mistral_query = ChatCompletionRequest(
+     tools=[
+         Tool(
+             function=Function(
+                 name="get_current_weather",
+                 description="Get the current weather",
+                 parameters={
+                     "type": "object",
+                     "properties": {
+                         "location": {
+                             "type": "string",
+                             "description": "The city and state, e.g. San Francisco, CA",
+                         },
+                         "format": {
+                             "type": "string",
+                             "enum": ["celsius", "fahrenheit"],
+                             "description": "The temperature unit to use. Infer this from the user's location.",
+                         },
+                     },
+                     "required": ["location", "format"],
+                 },
+             )
+         )
+     ],
+     messages=[
+         UserMessage(content="What's the weather like today in Paris"),
+     ],
+     model="test",
+ )
+
+ # encode_chat_completion returns raw token ids; wrap them in a 1-element
+ # batch tensor before moving them to the device
+ encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
+ model_inputs = torch.tensor([encodeds]).to(device)
+ model.to(device)
+
+ generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
+ # decode with the underlying SentencePiece tokenizer
+ sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer
+ decoded = sp_tokenizer.decode(generated_ids[0].tolist())
+ print(decoded)
  ```
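
+ In full precision the 8x22B weights need several hundred gigabytes of memory. As a minimal sketch (an editor's addition, not part of the original card, substituting the current `BitsAndBytesConfig` interface for the `load_in_4bit` shortcut shown in the removed section above; assumes a CUDA GPU and `pip install bitsandbytes accelerate`), the model can be loaded in 4-bit:

+ ```python
+ # Hedged sketch: 4-bit loading via bitsandbytes to reduce memory use
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_compute_dtype=torch.float16,  # a common choice, not mandated by the card
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mixtral-8x22B-Instruct-v0.1",
+     quantization_config=bnb_config,
+     device_map="auto",  # shard layers across available devices
+ )
+ ```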
 
+ # Instruct tokenizer
+ The HuggingFace tokenizer included in this release should match our own. To compare:
+ `pip install mistral-common`

+ ```py
+ from mistral_common.protocol.instruct.messages import (
+     AssistantMessage,
+     UserMessage,
+ )
+ from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
+ from mistral_common.protocol.instruct.request import ChatCompletionRequest
+
+ from transformers import AutoTokenizer
+
+ tokenizer_v3 = MistralTokenizer.v3()
+
+ mistral_query = ChatCompletionRequest(
+     messages=[
+         UserMessage(content="How many experts ?"),
+         AssistantMessage(content="8"),
+         UserMessage(content="How big ?"),
+         AssistantMessage(content="22B"),
+         UserMessage(content="Noice 🎉 !"),
+     ],
+     model="test",
+ )
+ hf_messages = mistral_query.model_dump()['messages']
+
+ # tokenize the same conversation with both tokenizers
+ tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens
+
+ tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
+ tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True)
+
+ # the two token-id sequences should be identical
+ assert tokenized_hf == tokenized_mistral
  ```
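
+ To inspect the rendered prompt itself rather than compare token ids, a small self-contained sketch (an editor's addition; `apply_chat_template` with `tokenize=False` returns the templated string):

+ ```py
+ from transformers import AutoTokenizer
+
+ tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
+ messages = [
+     {"role": "user", "content": "How many experts ?"},
+     {"role": "assistant", "content": "8"},
+ ]
+ # returns the prompt string, so the [INST] ... [/INST] layout and
+ # special tokens are visible
+ print(tokenizer_hf.apply_chat_template(messages, tokenize=False))
+ ```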
 
+ # Function calling and special tokens
+ This tokenizer includes additional special tokens related to function calling:
+ - [TOOL_CALLS]
+ - [AVAILABLE_TOOLS]
+ - [/AVAILABLE_TOOLS]
+ - [TOOL_RESULTS]
+ - [/TOOL_RESULTS]

+ If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](https://github.com/mistralai/mistral-common/blob/main/src/mistral_common/tokens/tokenizers/sentencepiece.py#L299).
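
+ As a quick way to see where these tokens land, here is a sketch (an editor's addition, not from the original card; it reuses the tool-call request pattern from "Run the model" above and relies on the HF tokenizer matching, per the assert in the previous section):

+ ```py
+ from mistral_common.protocol.instruct.messages import UserMessage
+ from mistral_common.protocol.instruct.tool_calls import Tool, Function
+ from mistral_common.protocol.instruct.request import ChatCompletionRequest
+ from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
+ from transformers import AutoTokenizer
+
+ tokenizer_v3 = MistralTokenizer.v3()
+ request = ChatCompletionRequest(
+     tools=[Tool(function=Function(
+         name="get_current_weather",
+         description="Get the current weather",
+         parameters={"type": "object", "properties": {}},
+     ))],
+     messages=[UserMessage(content="What's the weather like today in Paris")],
+     model="test",
+ )
+ ids = tokenizer_v3.encode_chat_completion(request).tokens
+
+ # Map the ids back to token strings with the matching HF tokenizer; the
+ # bracketed special tokens should appear among them.
+ tokenizer_hf = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
+ print([t for t in tokenizer_hf.convert_ids_to_tokens(ids) if t.startswith("[")])
+ ```
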
  # The Mistral AI Team
+ Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,
+ Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,
+ Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,
+ Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,
+ Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,
+ Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,
+ Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,
+ Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,
+ Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,
+ Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,
+ Valera Nemychnikova, William El Sayed, William Marshall