Squashing commit
Co-authored-by: mishig <mishig@users.noreply.huggingface.co>
Co-authored-by: v2ray <v2ray@users.noreply.huggingface.co>
This view is limited to 50 files because it contains too many changes.
- .gitattributes +35 -0
- README.md +103 -0
- RELEASE +71 -0
- config.json +30 -0
- convert.py +278 -0
- generation_config.json +6 -0
- model-00001-of-00059.safetensors +3 -0
- model-00002-of-00059.safetensors +3 -0
- model-00003-of-00059.safetensors +3 -0
- model-00004-of-00059.safetensors +3 -0
- model-00005-of-00059.safetensors +3 -0
- model-00006-of-00059.safetensors +3 -0
- model-00007-of-00059.safetensors +3 -0
- model-00008-of-00059.safetensors +3 -0
- model-00009-of-00059.safetensors +3 -0
- model-00010-of-00059.safetensors +3 -0
- model-00011-of-00059.safetensors +3 -0
- model-00012-of-00059.safetensors +3 -0
- model-00013-of-00059.safetensors +3 -0
- model-00014-of-00059.safetensors +3 -0
- model-00015-of-00059.safetensors +3 -0
- model-00016-of-00059.safetensors +3 -0
- model-00017-of-00059.safetensors +3 -0
- model-00018-of-00059.safetensors +3 -0
- model-00019-of-00059.safetensors +3 -0
- model-00020-of-00059.safetensors +3 -0
- model-00021-of-00059.safetensors +3 -0
- model-00022-of-00059.safetensors +3 -0
- model-00023-of-00059.safetensors +3 -0
- model-00024-of-00059.safetensors +3 -0
- model-00025-of-00059.safetensors +3 -0
- model-00026-of-00059.safetensors +3 -0
- model-00027-of-00059.safetensors +3 -0
- model-00028-of-00059.safetensors +3 -0
- model-00029-of-00059.safetensors +3 -0
- model-00030-of-00059.safetensors +3 -0
- model-00031-of-00059.safetensors +3 -0
- model-00032-of-00059.safetensors +3 -0
- model-00033-of-00059.safetensors +3 -0
- model-00034-of-00059.safetensors +3 -0
- model-00035-of-00059.safetensors +3 -0
- model-00036-of-00059.safetensors +3 -0
- model-00037-of-00059.safetensors +3 -0
- model-00038-of-00059.safetensors +3 -0
- model-00039-of-00059.safetensors +3 -0
- model-00040-of-00059.safetensors +3 -0
- model-00041-of-00059.safetensors +3 -0
- model-00042-of-00059.safetensors +3 -0
- model-00043-of-00059.safetensors +3 -0
- model-00044-of-00059.safetensors +3 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,103 @@
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
# Mixtral-8x22B

> [!TIP]
> Kudos to [@v2ray](https://huggingface.co/v2ray) for converting the checkpoints and uploading them in `transformers` compatible format. Go give them a follow!

Converted to HuggingFace Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py).

The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers loads the model in full precision. You can therefore further reduce the memory required to run the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

## Notice
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall.
RELEASE
ADDED
@@ -0,0 +1,71 @@

                                                                    ▄▄▄░░
                                                       ▄▄▄▄▄█████████░░░░
                                          ▄▄▄▄▄▄████████████████████░░░░░
                                       █████████████████████████████░░░░░
▄▄▄▄▄▄█████░░░                         █████████████████████████████░░░░░
▄▄▄▄▄██████████████████░░░░░░         ██████████████████████████████░░░░░
▄█████████████████████████████░░░░░░░░██████████████████████████████░░░░░
███████████████████████████████░░░░░░░██████████████████████████████░░░░░
███████████████████████████████░░░░░░░██████████████████████████████░░░░░
███████████████████████████████░░░░░░███████████████████████████████░░░░░
████████████████████████████████░░░░░███████████████████████████████░░░░░
████████████████████████████████░░░░████████████████████████████████░░░░░
█████████████████████████████████░░░████████████████████████████████░░░░░
█████████████████████████████████░░░████████████░███████████████████░░░░░
██████████████████████████████████░█████████████░███████████████████░░░░░
███████████████████░██████████████▄█████████████░███████████████████░░░░░
███████████████████░███████████████████████████░░███████████████████░░░░░
███████████████████░░██████████████████████████░░███████████████████░░░░░
███████████████████░░█████████████████████████░░░███████████████████░░░░░
███████████████████░░░████████████████████████░░░███████████████████░░░░░
███████████████████░░░████████████████████████░░░███████████████████░░░░░
███████████████████░░░░██████████████████████░░░░███████████████████░░░░░
███████████████████░░░░██████████████████████░░░░███████████████████░░░░░
███████████████████░░░░░█████████████████████░░░░███████████████████░░░░░
███████████████████░░░░░████████████████████░░░░░███████████████████░░░░░
███████████████████░░░░░░███████████████████░░░░░███████████████████░░░░░
███████████████████░░░░░░██████████████████░░░░░░███████████████████░░░░░
███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░
███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░
███████████████████░░░░░░░░███████████████░░░░░░░██████████░░░░░░░░░░░░░░
███████████████████░░░░░░░░███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
███████████████████░░░░░░░░███████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
███████████████████░░░░░░░░░██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  ░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    ░░░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                  ░░░░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░
░░░░░


╓────────────────────────────────────────────────────────────────────────────╖
║                         MIXTRAL 8x22B ·· 24/04/10                           ║
╙────────────────────────────────────────────────────────────────────────────╜

╓────────────────────────────────────────────────────────────────────────────╖
║                                                                              ║
║                               ·· md5sum ··                                   ║
║                                                                              ║
║         3816cd2c4f827b4b868bc6481d5d3ba2  consolidated.safetensors           ║
║         37974873eb68a7ab30c4912fc36264ae  tokenizer.model                    ║
╙────────────────────────────────────────────────────────────────────────────╜

╓────────────────────────────────────────────────────────────────────────────╖
║                                                                              ║
║                   ·· Released by the Mistral AI team ··                      ║
║  Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,          ║
║  Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,  ║
║  Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,            ║
║  Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,      ║
║  Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,           ║
║  Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon        ║
║  Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,      ║
║  Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,      ║
║  Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak,            ║
║  Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet,          ║
║  Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall        ║
║                                                                              ║
╙────────────────────────────────────────────────────────────────────────────╜
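The md5sums above describe the raw Mistral release (`consolidated.safetensors` and `tokenizer.model`), not the sharded `transformers` checkpoint in this repository. If you have the raw files on disk, a minimal verification sketch (a hypothetical helper, assuming the files sit in the current directory) could look like this:

```python
# Verify the raw-release files against the md5sums listed in RELEASE above.
import hashlib

EXPECTED = {
    "consolidated.safetensors": "3816cd2c4f827b4b868bc6481d5d3ba2",
    "tokenizer.model": "37974873eb68a7ab30c4912fc36264ae",
}

def md5_of(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so the ~280 GB checkpoint never sits in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for name, expected in EXPECTED.items():
    actual = md5_of(name)
    status = "OK" if actual == expected else f"MISMATCH (got {actual})"
    print(f"{name}: {status}")
```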
config.json
ADDED
@@ -0,0 +1,30 @@
{
  "architectures": [
    "MixtralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 6144,
  "initializer_range": 0.02,
  "intermediate_size": 16384,
  "max_position_embeddings": 65536,
  "model_type": "mixtral",
  "num_attention_heads": 48,
  "num_experts_per_tok": 2,
  "num_hidden_layers": 56,
  "num_key_value_heads": 8,
  "num_local_experts": 8,
  "output_router_logits": false,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000,
  "router_aux_loss_coef": 0.001,
  "router_jitter_noise": 0.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
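The configuration above is enough to estimate the checkpoint's size and explain why it ships as 59 shards of roughly 5 GB each. A back-of-the-envelope sketch (an approximation that ignores layer norms and the tiny router gates, not an official figure):

```python
# Rough parameter count for Mixtral-8x22B, derived from config.json above.
hidden_size = 6144
intermediate_size = 16384
num_hidden_layers = 56
num_attention_heads = 48
num_key_value_heads = 8
num_local_experts = 8
vocab_size = 32000

head_dim = hidden_size // num_attention_heads        # 128
kv_dim = num_key_value_heads * head_dim              # 1024 (grouped-query attention)

attn = 2 * hidden_size * hidden_size + 2 * hidden_size * kv_dim   # q/o + k/v projections
moe = num_local_experts * 3 * hidden_size * intermediate_size     # w1/w2/w3 per expert
per_layer = attn + moe
embeddings = 2 * vocab_size * hidden_size            # embed_tokens + lm_head (not tied)

total = num_hidden_layers * per_layer + embeddings
print(f"~{total / 1e9:.0f}B parameters")             # ~141B
print(f"~{total * 2 / 1e9:.0f} GB in bfloat16")      # ~281 GB, consistent with 59 shards of ~5 GB
```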
convert.py
ADDED
@@ -0,0 +1,278 @@
# Copyright 2023 Mistral AI and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import os

import torch
from safetensors.torch import load_file

from transformers import (
    MixtralConfig,
    MixtralForCausalLM,
)

"""
Sample usage:

```
python convert.py \
    --input-dir /path/to/downloaded/mixtral/weights --model-size 7B --output-dir /output/path
```

Thereafter, models can be loaded via:

```py
from transformers import MixtralForCausalLM

model = MixtralForCausalLM.from_pretrained("/output/path")
```

Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
"""

def compute_intermediate_size(n, ffn_dim_multiplier=1, multiple_of=256):
    return multiple_of * ((int(ffn_dim_multiplier * int(8 * n / 3)) + multiple_of - 1) // multiple_of)

def read_json(path):
    with open(path, "r") as f:
        return json.load(f)

def write_json(text, path):
    with open(path, "w") as f:
        json.dump(text, f)

def write_model(model_path, input_base_path, model_size, safe_serialization=True):
    os.makedirs(model_path, exist_ok=True)

    params = read_json(os.path.join(input_base_path, "params.json"))
    num_shards = 1

    # For some reason this is a string in the params.json
    sliding_window = int(params["sliding_window"]) if "sliding_window" in params else None
    base = params.get("rope_theta", 10000.0)
    vocab_size = params["vocab_size"]

    if model_size == "7B":
        dim = params["hidden_size"]
        max_position_embeddings = 4096 * 8
        num_local_experts = params["num_local_experts"]
        ffn_dim = params["intermediate_size"]
        n_layers = params["num_hidden_layers"]
        n_heads = params["num_attention_heads"]
        n_heads_per_shard = n_heads // num_shards
        dims_per_head = dim // n_heads
        if "num_key_value_heads" in params:
            num_key_value_heads = params["num_key_value_heads"]  # for GQA / MQA
            num_local_key_value_heads = num_key_value_heads // num_shards
            key_value_dim = dims_per_head * num_local_key_value_heads
        else:  # compatibility with other checkpoints
            num_key_value_heads = n_heads
            num_local_key_value_heads = n_heads_per_shard
            key_value_dim = dim
        rms_norm_eps = params["rms_norm_eps"]
    elif model_size == "22B":
        dim = params["dim"]
        max_position_embeddings = params["max_seq_len"]
        num_local_experts = params["moe"]["num_experts"]
        ffn_dim = params["hidden_dim"]
        n_layers = params["n_layers"]
        n_heads = params["n_heads"]
        n_heads_per_shard = n_heads // num_shards
        dims_per_head = dim // n_heads
        if "n_kv_heads" in params:
            num_key_value_heads = params["n_kv_heads"]  # for GQA / MQA
            num_local_key_value_heads = num_key_value_heads // num_shards
            key_value_dim = dims_per_head * num_local_key_value_heads
        else:  # compatibility with other checkpoints
            num_key_value_heads = n_heads
            num_local_key_value_heads = n_heads_per_shard
            key_value_dim = dim
        rms_norm_eps = params["norm_eps"]
    else:
        raise Exception("Illegal model size:", model_size)

    # permute for sliced rotary
    def permute(w, n_heads=n_heads, dim1=dim, dim2=dim):
        return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)

    print(f"Fetching all parameters from the checkpoint at \"{input_base_path}\"...")
    # Load weights
    if model_size == "7B":
        loaded = [
            torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pt"), map_location="cpu") for i in range(8)
        ]
        merged_state_dict = {}
        for state_dict in loaded:
            merged_state_dict.update(state_dict)
    elif model_size == "22B":
        merged_state_dict = load_file(os.path.join(input_base_path, "consolidated.safetensors"))
    print("Parameters load finished.")

    state_dict = {}
    for layer_i in range(n_layers):
        print(f"At layer {layer_i}...")
        # Sharded
        # Note that attention.w{q,k,v,o}, feed_forward.w[1,2,3], attention_norm.weight and ffn_norm.weight share
        # the same storage object, saving attention_norm and ffn_norm will save other weights too, which is
        # redundant as other weights will be stitched from multiple shards. To avoid that, they are cloned.

        state_dict.update(
            {
                f"model.layers.{layer_i}.input_layernorm.weight": merged_state_dict[
                    f"layers.{layer_i}.attention_norm.weight"
                ].clone(),
                f"model.layers.{layer_i}.post_attention_layernorm.weight": merged_state_dict[
                    f"layers.{layer_i}.ffn_norm.weight"
                ].clone(),
            }
        )

        state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute(
            merged_state_dict[f"layers.{layer_i}.attention.wq.weight"]
            .view(n_heads_per_shard, dims_per_head, dim)
            .reshape(dim, dim)
        )
        state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute(
            merged_state_dict[f"layers.{layer_i}.attention.wk.weight"]
            .view(num_local_key_value_heads, dims_per_head, dim)
            .reshape(key_value_dim, dim),
            num_key_value_heads,
            key_value_dim,
            dim,
        )
        state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = (
            merged_state_dict[f"layers.{layer_i}.attention.wv.weight"]
            .view(num_local_key_value_heads, dims_per_head, dim)
            .reshape(key_value_dim, dim)
        )

        state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = merged_state_dict[
            f"layers.{layer_i}.attention.wo.weight"
        ]

        if model_size == "7B":
            w1 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w1"]
            w2 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w2"]
            w3 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w3"]

            experts_w1 = [
                w1[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w1):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w1"
                state_dict[expert_key + ".weight"] = expert_block.clone()

            experts_w2 = [
                w2[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w2):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w2"
                state_dict[expert_key + ".weight"] = expert_block.T.clone().contiguous()

            experts_w3 = [
                w3[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w3):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w3"
                state_dict[expert_key + ".weight"] = expert_block.clone()

            state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
                f"layers.{layer_i}.block_sparse_moe.gate.weight"
            ]
        elif model_size == "22B":
            for expert_i in range(num_local_experts):
                w1 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w1.weight"]
                w2 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w2.weight"]
                w3 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w3.weight"]
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w1.weight"] = w1.contiguous().clone()
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w2.weight"] = w2.contiguous().clone()
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w3.weight"] = w3.contiguous().clone()
            state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
                f"layers.{layer_i}.feed_forward.gate.weight"
            ]

    state_dict.update(
        {
            "model.norm.weight": merged_state_dict["norm.weight"],
            "model.embed_tokens.weight": merged_state_dict["tok_embeddings.weight"],
            "lm_head.weight": merged_state_dict["output.weight"],
        }
    )

    config_additional_kwargs = {}
    if model_size == "22B":
        config_additional_kwargs["num_experts_per_tok"] = params["moe"]["num_experts_per_tok"]
    config = MixtralConfig(
        hidden_size=dim,
        intermediate_size=ffn_dim,
        num_attention_heads=n_heads,
        num_hidden_layers=n_layers,
        rms_norm_eps=rms_norm_eps,
        num_key_value_heads=num_key_value_heads,
        vocab_size=vocab_size,
        rope_theta=base,
        max_position_embeddings=max_position_embeddings,
        sliding_window=sliding_window,
        num_local_experts=num_local_experts,
        **config_additional_kwargs
    )

    print("Loading the checkpoint in a Mixtral model.")
    with torch.device("meta"):
        model = MixtralForCausalLM(config)
    # Avoid saving this as part of the config.
    del model.config._name_or_path
    model.config.torch_dtype = torch.bfloat16
    print("Saving in the Transformers format.")

    model.load_state_dict(state_dict, strict=True, assign=True)

    for n, p in model.named_parameters():
        assert p.device.type != "meta", f"{n} has not been loaded!"

    model.save_pretrained(model_path, safe_serialization=safe_serialization)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input-dir",
        help="Location of Mixtral weights, which contains tokenizer.model and model folders",
        required=True,
    )
    parser.add_argument(
        "--model-size",
        choices=["7B", "22B"],
        help="Which Mixtral release to convert: 7B for the 8x7B checkpoints, 22B for the 8x22B checkpoint. For more details on Mixtral, check out the original repo: https://huggingface.co/mistral-ai",
        default="7B",
    )
    parser.add_argument("--output-dir", help="Location to write HF model", required=True)
    parser.add_argument("--safe-serialization", type=bool, default=True, help="Whether or not to save using `safetensors`.")
    args = parser.parse_args()
    write_model(
        model_path=args.output_dir,
        input_base_path=args.input_dir,
        model_size=args.model_size,
        safe_serialization=args.safe_serialization,
    )

if __name__ == "__main__":
    main()
generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.40.0.dev0"
}
model-00001-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:111cce2f7d94e90d2c4d82061ab9e91fbc4355b4217fb3065ae01ca4fa7ba523
size 4998663696
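The entry above (and each `model-*-of-00059.safetensors` file that follows) is a Git LFS pointer rather than the tensor data itself: three text lines giving the pointer spec version, the SHA-256 of the real file, and its size in bytes. Purely as an illustration (assuming the pointer text is what is checked out, e.g. after cloning with `GIT_LFS_SKIP_SMUDGE=1`), such a pointer could be parsed like this:

```python
# Illustrative only: parse a Git LFS pointer file like the one above.
def parse_lfs_pointer(path):
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return {
        "version": fields.get("version", ""),
        "sha256": fields.get("oid", "").split(":", 1)[-1],
        "size_bytes": int(fields.get("size", "0")),
    }

info = parse_lfs_pointer("model-00001-of-00059.safetensors")
print(info["sha256"], f"{info['size_bytes'] / 1e9:.2f} GB")
```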
model-00002-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d836890a32de1597279722d666d45104eacb3a114b42fdaf01bcaf4ada0dbf1d
size 4806799120
model-00003-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a124cfa523ffdf6c02882f44fc9419ca891f8a8c6170787123afa913a9c663ec
size 4806799120
model-00004-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:636c86b0bf1ba940a7f32e4f2b3e36be2f50ef3221efa45df2b01f4e72bc4585
size 4806799120
model-00005-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48d38897fd5726ae25a489fb888f76da4862880154ad2ed37abc004442971695
size 4806799120
model-00006-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8024f99a01c6cc52b5e3cab678e8e18f6077f1df4ec750afdc78018539a5894
size 4806799120
model-00007-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:284a34e315a91308b9e84394dd027f21e18eb35cae27c8949ba3ef24853bc5ea
size 4806799120
model-00008-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17032a806898cb9615e080bab05bda3e5e863932e057b279c775f912e4ed1605
size 4806799120
model-00009-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dcc84e3e525705fab85f8eaefe4ce2e69a6399fb63b76b6fcfc5fe55e6eb3bf3
size 4806799120
model-00010-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5337768358d785771187c8dc6dcea00ea70391170d866cd477dbbb92f165b97
size 4806799120
model-00011-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7acb43109103dfc4a1021b43e898130e652e70898018b49aeaaf25f035a15841
size 4806799136
model-00012-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:875c9a5523173e0d2ae2ed7abde3a95b021cac1cffecf5d400d505aadc061032
size 4806799152
model-00013-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67f2ad618e91929141c9299adc8f29d9934b812d426f08ddf85e692a9833c77f
size 4806799152
model-00014-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:428dfaae06c3e5739c807d0e0f1bd7bdbfff41f2dfcd4e4e695adfce92050070
size 4806799152
model-00015-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8db4f2e385471db237e22933355e6e0bcc7f46500eee174e6d0edda9f240a181
size 4806799152
model-00016-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0628246a502f5143460c4a3bc7d7e094c60c22f6b7f3fa8302ee21f2854a5445
size 4806799152
model-00017-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3af34bcf0c64214978ac0c69c04e0afc2027b9d1ffa9729abac3316d16a83c7
size 4806799152
model-00018-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eab83aec0b55f27a91c2251cf678de1505a7cc4c7420e65a6a7820c3358e54f7
size 4806799152
model-00019-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd4eb8fff582c8d5197245725feebd76647c8c6a7d691d9970ce2e22a0bbe2b5
size 4806799152
model-00020-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f092555ec809dc5e6fa6a1989d8d95b30f73ef7e83f12956a7d9f532385a0292
size 4806799152
model-00021-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7e6c4bb1d9b546123ba2819d4c47dd1866c79c13ba643929c80c9c4b1dc8faf
size 4806799152
model-00022-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cd6a28d9bfc93d6b83f77e11e3340d615a0cfc1760fc4ec7a1b4bb09847b028
size 4806799152
model-00023-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25817204fb3a1c10276c0d8c6f3503c79388348ecb458740804352b75075d3cd
size 4806799152
model-00024-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a425a959d59723dde676a9121819788dd5f8d7aaae0e93bab9381ba8789729a
size 4932529864
model-00025-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8692ea80b7668a38dad83fc4be863b933f17da2b7ee0e12167761271f437634
size 4995542848
model-00026-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4147e8cd43de8b2541c9655c6a846cacfea355f8f84fc74dfd4f1e5b27bb708a
size 4995542848
model-00027-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43e5a174e86eb6759569426b59e6fa2e191f6d79a605b6de98fd1d0d51b2f447
size 4932628288
model-00028-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:571581317bdc6188143311fc98f348ff1742da5d3caf9e5fb4666bce7641ebf7
size 4806774344
model-00029-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7081a4b7a8dbe4b2153ab945673bd64e4dbcb3e8ab342fa54cbe4d406d3c694a
size 4806799144
model-00030-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bad6d9a1d4e0ce94e1c37cc0b3a11c27e19cd20cb70e6ec7c58c2dbec48e0da5
size 4806799144
model-00031-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0baf3678b2019ce9f3c469166902a7cb5c4d2c9cf0d74d9aac2711b405e942c9
size 4806799144
model-00032-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2185b34b4f0e966b915cc779797984a8b12c7f88010ef767c8e9e9cf0c79565
size 4806799144
model-00033-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3848b3acfbf4f6d384d07233348a7b68100fc928eb6a0f514bfd0131cf2ebf44
size 4806799152
model-00034-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa829c75a954b291c0de5fc2244727ada42f692d038d19ef8c890424635d1c6e
size 4806799152
model-00035-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:faa68a0f1f0272c470853f643b934fadebeb7f71ed9b80746bfc824f7fe75e49
size 4806799152
model-00036-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3e0a006f69da252b0d760bcb79de478182d73b2ea93f2afcd57d98d52ba8b0d
size 4806799152
model-00037-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ef1a0d8420239ab78812b3d14fc99c1a99d0f364a0ea419dac7e187ae001b9a
size 4806799152
model-00038-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82f18563cccd71b7d5c23b4774f1beb335de28e2260442f9032f9ad29e5674e4
size 4806799152
model-00039-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:427ca0a9f81e2a9af1465d6c7b8d6ebaf82d3b2637dd91cd8c1f61eaf204e7dd
size 4806799152
model-00040-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c341cd9aad7b90118afa75c3b3336754d957de5a4a55190799049f1dd70be512
size 4806799152
model-00041-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:877dd428fba3d3b10d5f688626b21a356fdf23518f4be5e95c1aa596d70b2d46
size 4806799152
model-00042-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33005e107da051dc59660e3aa4d29fb236e783849adf73049b2aecb59e9fefdb
size 4806799152
model-00043-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e8bdcf31f9f1925fb5673cfa9c40f07b59c3073b02f6c06c8ece6d27dc7f24c
size 4806799152
model-00044-of-00059.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68f0e70c784a929c513fdd8827bf9379a1f0919ce44d60ffcd573ccf81c2ef83
size 4806799152