RichardErkhov committed on
Commit afbaf47
1 Parent(s): b46fc82

uploaded readme

Files changed (1):
README.md ADDED (+130 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Mixtral-8x7B-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/mistralai/
- Original model: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/
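
This upload stores the weights already quantized to 4-bit with bitsandbytes, so they can be loaded directly. A minimal sketch (the repo id below is a placeholder for this repository's Hub id; a recent transformers/bitsandbytes, the accelerate package, and a CUDA GPU are assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "REPO_ID"  # placeholder: substitute this repository's Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The 4-bit quantization config is serialized with the checkpoint,
# so no extra quantization flags are needed at load time.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```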


Original model description:
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different.

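Since the weights are vLLM-compatible, here is a minimal offline-serving sketch (assuming a vLLM version with Mixtral support and enough GPU memory across the visible devices):

```python
from vllm import LLM, SamplingParams

# tensor_parallel_size=2 is an illustrative choice; size it to your GPUs.
llm = LLM(model="mistralai/Mixtral-8x7B-v0.1", tensor_parallel_size=2)
sampling = SamplingParams(max_tokens=20)

outputs = llm.generate(["Hello my name is"], sampling)
print(outputs[0].outputs[0].text)
```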
## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
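
Equivalently, the high-level `pipeline` API wraps the same tokenize-generate-decode steps (a sketch using the same model id):

```python
from transformers import pipeline

# "text-generation" bundles tokenization, generation, and decoding.
# Note: like the snippet above, this loads in full precision by default.
pipe = pipeline("text-generation", model="mistralai/Mixtral-8x7B-v0.1")
print(pipe("Hello my name is", max_new_tokens=20)[0]["generated_text"])
```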

By default, transformers loads the model in full precision: with roughly 46.7B parameters, the float32 weights alone take about 187 GB of memory. You may therefore want to reduce the memory requirements to run the model through the optimizations offered in the HF ecosystem:

### In half-precision

Note that `float16` precision only works on GPU devices.

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
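
`load_in_4bit=True` applies default 4-bit settings; for explicit control you can pass a `BitsAndBytesConfig` instead. A sketch (the values below are illustrative choices, not settings taken from this repo):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# nf4 quantization with fp16 compute is a common pairing.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", quantization_config=bnb_config
)
```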

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
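
Note that Flash Attention 2 support in transformers requires the separate [`flash-attn`](https://github.com/Dao-AILab/flash-attention) package and a compatible GPU, and its kernels only run in half precision, which is why `torch_dtype` is set above.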

## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.

# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.