Panchovix committed
Commit 21dbbf2
1 Parent(s): 395be0d

Update README.md

Files changed (1)
  1. README.md +235 -5
README.md CHANGED
@@ -1,5 +1,235 @@
- ---
- license: other
- license_name: other
- license_link: LICENSE
- ---

---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko

extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# EXL2 quantization at 3.75BPW of Mistral-Large-Instruct-2407
Quantized with the latest exllamav2 dev version.
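
As a rough sketch (the local path, prompt format and generation settings below are illustrative placeholders, not part of the upstream model card), the quant can be loaded directly with the exllamav2 Python API:

```py
# Illustrative sketch: loading this 3.75 BPW EXL2 quant with exllamav2.
# "/path/to/exl2-quant" stands for a local download of this repository.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/path/to/exl2-quant")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)      # lazy cache so the weights can be auto-split across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="[INST] Who are you? [/INST]", max_new_tokens=128))
```

Even at 3.75 bits per weight, the 123B parameters come to roughly 58 GB of weights, so multiple GPUs (or one very large GPU) are still needed.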

# Model Card for Mistral-Large-Instruct-2407

Mistral-Large-Instruct-2407 is an advanced dense Large Language Model (LLM) of 123B parameters with state-of-the-art reasoning, knowledge and coding capabilities.

For more details about this model, please refer to our release [blog post](https://mistral.ai/news/mistral-large-2407/).

## Key features
- **Multi-lingual by design:** Dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch and Polish.
- **Proficient in coding:** Trained on 80+ coding languages such as Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
- **Agentic-centric:** Best-in-class agentic capabilities with native function calling and JSON output.
- **Advanced Reasoning:** State-of-the-art mathematical and reasoning capabilities.
- **Mistral Research License:** Allows usage and modification for research and non-commercial use.
- **Large Context:** A large 128k context window.

## Metrics

### Base Pretrained Benchmarks

| Benchmark | Score |
| --- | --- |
| MMLU | 84.0% |

### Base Pretrained Multilingual Benchmarks (MMLU)

| Benchmark | Score |
| --- | --- |
| French | 82.8% |
| German | 81.6% |
| Spanish | 82.7% |
| Italian | 82.7% |
| Dutch | 80.7% |
| Portuguese | 81.6% |
| Russian | 79.0% |
| Korean | 60.1% |
| Japanese | 78.8% |
| Chinese | 74.8% |

### Instruction Benchmarks

| Benchmark | Score |
| --- | --- |
| MT Bench | 8.63 |
| Wild Bench | 56.3 |
| Arena Hard | 73.2 |

### Code & Reasoning Benchmarks

| Benchmark | Score |
| --- | --- |
| Human Eval | 92% |
| Human Eval Plus | 87% |
| MBPP Base | 80% |
| MBPP Plus | 69% |

### Math Benchmarks

| Benchmark | Score |
| --- | --- |
| GSM8K | 93% |
| Math Instruct (0-shot, no CoT) | 70% |
| Math Instruct (0-shot, CoT) | 71.5% |

## Usage

The model can be used with two different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Large-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)

### Mistral Inference

#### Install

It is recommended to use `mistralai/Mistral-Large-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, see the [Transformers](#transformers) section below.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Large')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Large-2407", allow_patterns=["params.json", "consolidated-*.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
Given the size of this model, you will need a node with several GPUs (more than 300 GB of combined vRAM: the 123B parameters alone occupy roughly 246 GB in bf16, before the KV cache).
If you have 8 GPUs on your machine, you can chat with the model using:

```
torchrun --nproc-per-node 8 --no-python mistral-chat $HOME/mistral_models/Large --instruct --max_tokens 256 --temperature 0.7
```

For example, try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
```

#### Instruct following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.7, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```

#### Function calling

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.7, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```
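
The model answers a tool-enabled request by emitting its tool calls as a JSON list of `{"name": ..., "arguments": ...}` objects. As a small sketch (the `raw` string below is a hand-written example of that shape, not a captured model response), the calls can be parsed with the standard library before dispatching them to your own functions:

```py
import json

# Hand-written example of the expected tool-call payload shape (not real model output).
raw = '[{"name": "get_current_weather", "arguments": {"location": "Paris, FR", "format": "celsius"}}]'

for call in json.loads(raw):
    # Dispatch each requested call to your own implementation of the declared function.
    print(f"model requested {call['name']} with arguments {call['arguments']}")
```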

### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this:

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Large-2407")
chatbot(messages)
```
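
The `pipeline` helper hides the loading details; as a rough sketch (the dtype, device map and sampling values are illustrative choices, not recommendations from the upstream card), the same chat can also be run through `AutoModelForCausalLM` and the tokenizer's chat template:

```py
# Illustrative sketch: explicit loading with device_map/torch_dtype instead of the pipeline helper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Large-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In bf16 the full model still needs several large GPUs; `device_map="auto"` simply spreads the layers over whatever devices are visible.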

## Limitations

The Mistral Large model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Diogo Costa, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall