Update README.md
---
language:
- multilingual
- en
license: apache-2.0
library_name: transformers
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
- mixture-of-experts
- science
- latex
datasets:
- lamm-mit/Cephalo-Bioinspired-Mechanics-Materials
- lamm-mit/Cephalo-Wikipedia-Materials
- OleehyO/latex-formulas
- lamm-mit/OleehyO-latex-formulas
pipeline_tag: image-text-to-text
inference:
  parameters:
    temperature: 0.3
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
---
## Model Summary

Cephalo is a series of multimodal, materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI and multi-agent AI frameworks.

A novel aspect of Cephalo's development is its dataset generation method. The extraction process uses advanced algorithms to detect and separate images and their corresponding textual descriptions from complex PDF documents, yielding well-reasoned image-text pairs. These pairs are then refined and validated with large language model (LLM)-based natural language processing, ensuring high-quality and contextually relevant training data.
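The exact extraction code is not part of this card. As a minimal sketch of the harvesting step only (assuming PyMuPDF, i.e. `fitz`, for PDF parsing; the LLM-based pairing and refinement described above are not shown):

```python
import os
import re
import fitz  # PyMuPDF (assumed tooling, not the authors' released pipeline)

def extract_figures_and_captions(pdf_path, out_dir="./extracted"):
    """Save embedded images and collect caption-like text, page by page."""
    os.makedirs(out_dir, exist_ok=True)
    doc = fitz.open(pdf_path)
    pairs = []
    for page_idx, page in enumerate(doc):
        image_files = []
        for img_idx, img in enumerate(page.get_images(full=True)):
            xref = img[0]
            info = doc.extract_image(xref)
            fname = os.path.join(out_dir, f"p{page_idx}_{img_idx}.{info['ext']}")
            with open(fname, "wb") as f:
                f.write(info["image"])
            image_files.append(fname)
        # Caption-like lines ("Figure 3: ..." / "Fig. 3. ...") found on the same page
        captions = re.findall(r"Fig(?:ure)?\.?\s*\d+[.:].*", page.get_text())
        pairs.append({"page": page_idx, "images": image_files, "captions": captions})
    return pairs
```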
Cephalo can interpret complex visual scenes, generate contextually accurate language descriptions, and answer queries.

The model is designed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder with an autoregressive transformer to support complex natural language understanding of combined visual and textual inputs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png)

Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.

This version of Cephalo, lamm-mit/Cephalo-Idefics2-3x8b-beta, is a Mixture-of-Experts model based on the Idefics-2 model. The basic model architecture is as follows:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/b7BK8ZtDzTMsyFDi0wP3w.png)
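For intuition only, the routing idea behind such a mixture can be sketched as a small gating layer that scores each hidden state and picks a single expert, matching the `k=1` setting used when the MoE is constructed later in this card. This toy sketch is not the implementation shipped in this repository:

```python
import torch
import torch.nn as nn

class Top1Gate(nn.Module):
    """Toy top-1 gate: score each token's hidden state and route it to one expert."""
    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, hidden_states, expert_outputs):
        # hidden_states: (batch, seq, hidden); expert_outputs: list of (batch, seq, hidden)
        scores = self.gate(hidden_states)                 # (batch, seq, num_experts)
        weights = torch.softmax(scores, dim=-1)
        top1 = weights.argmax(dim=-1, keepdim=True)       # a single expert per token (k=1)
        mask = torch.zeros_like(weights).scatter_(-1, top1, 1.0)
        stacked = torch.stack(expert_outputs, dim=-1)     # (batch, seq, hidden, num_experts)
        return (stacked * mask.unsqueeze(2)).sum(dim=-1)  # keep only the chosen expert's output
```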
### Download Idefics-2 MoE Model and Sample inference code

```bash
pip install transformers -U
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig

def count_parameters(model):
    total_params = sum(p.numel() for p in model.parameters())
    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    # number of parameters in billions
    return total_params / 1e9, trainable_params / 1e9

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name_moe = "lamm-mit/Cephalo-Idefics2-3x8b-beta"

config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True)
moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe, config=config,
    trust_remote_code=True, torch_dtype=torch.bfloat16,
    # quantization_config=quantization_config,
).to(device)

count_parameters(moe_model)
```
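The `quantization_config` argument above is commented out and not defined in the snippet. If you want to load the MoE model in 4-bit to save memory, a configuration along the following lines (the same style used later in this card for the expert models) should work:

```python
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Pass quantization_config=quantization_config to from_pretrained above and drop the
# explicit .to(device) call (use device_map="auto" to let bitsandbytes place the weights).
```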
Now use the downloaded model for inference:

```python
from transformers.image_utils import load_image

DEVICE = 'cuda'

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
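The call above uses the default decoding settings of `generate`. If you prefer sampling, for instance with the temperature of 0.3 listed in this card's inference parameters, the standard sampling arguments can be passed (the `top_p` value is only illustrative):

```python
generated_ids = moe_model.generate(
    **inputs,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.3,  # matches the inference temperature in this card's metadata
    top_p=0.9,        # illustrative value, not specified in the card
)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```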
## Make an Idefics-2-MoE model from scratch using several pre-trained models

Download the .py files that implement the Idefics-2 Mixture-of-Experts vision model:

```bash
pip install huggingface_hub
```
```python
from huggingface_hub import HfApi, hf_hub_download
from tqdm.notebook import tqdm
import os
import shutil

# Repository details
repo_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta"
api = HfApi()

# List all files in the repository
files_in_repo = api.list_repo_files(repo_id)

# Filter for .py files
py_files = [file for file in files_in_repo if file.endswith('.py')]

# Directory to save the downloaded files
save_dir = "./Idefics2_MoE/"
os.makedirs(save_dir, exist_ok=True)

# Download each .py file
for file_name in tqdm(py_files):
    file_path = hf_hub_download(repo_id=repo_id, filename=file_name)
    new_path = os.path.join(save_dir, file_name)
    shutil.move(file_path, new_path)
    print(f"Downloaded: {file_name}")

print("Download completed.")
```
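The downloaded .py files provide the `Idefics2ForCausalLMMoEConfig` and `Idefics2ForCausalLMMoE` classes used in the construction steps below. Depending on how you run the following cells, you may need to make the download directory importable first; a small sketch, assuming you work from the directory that contains `Idefics2_MoE`:

```python
import sys

# Make the downloaded MoE implementation files importable
sys.path.append(save_dir)
```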
Download models that will form the experts, as well as the base model:
```python
import torch
from transformers import AutoProcessor, AutoConfig, Idefics2ForConditionalGeneration, AutoTokenizer
from transformers import BitsAndBytesConfig

DEVICE = 'cuda'

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id_1 = 'lamm-mit/Cephalo-Idefics-2-vision-8b-beta'

model_1 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_1,
    torch_dtype=torch.bfloat16,               # if your GPU allows
    _attn_implementation="flash_attention_2", # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)

processor = AutoProcessor.from_pretrained(
    model_id_1,
    do_image_splitting=True
)

config = AutoConfig.from_pretrained(model_id_1, trust_remote_code=True)

IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}"
processor.chat_template = IDEFICS2_CHAT_TEMPLATE
```
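For reference, with this chat template a single user turn containing one image renders as shown below; this is the prompt format `apply_chat_template` produces and the format the gating-training prompts further down follow:

```python
messages = [
    {"role": "user",
     "content": [{"type": "image"},
                 {"type": "text", "text": "What is shown in this image?"}]},
]
print(processor.apply_chat_template(messages, add_generation_prompt=True))
# User:<image>What is shown in this image?<end_of_utterance>
# Assistant:
```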
Now, load the rest of the models:

```python
model_id_2 = 'HuggingFaceM4/idefics2-8b-chatty'

model_2 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_2,
    torch_dtype=torch.bfloat16,               # if your GPU allows
    _attn_implementation="flash_attention_2", # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)

model_id_3 = 'HuggingFaceM4/idefics2-8b'

model_3 = Idefics2ForConditionalGeneration.from_pretrained(
    model_id_3,
    torch_dtype=torch.bfloat16,               # if your GPU allows
    _attn_implementation="flash_attention_2", # make sure Flash Attention 2 is installed
    trust_remote_code=True,
    # quantization_config=quantization_config,
)  # .to(DEVICE)
```
Put on device:

```python
model_1.to(DEVICE)
model_2.to(DEVICE)
model_3.to(DEVICE)
```
### Construct MoE

The `Idefics2ForCausalLMMoEConfig` and `Idefics2ForCausalLMMoE` classes are defined in the .py files downloaded above; `k` sets how many experts the gating layer selects (here, one).

```python
import copy

dtype = torch.bfloat16                       # Desired dtype for new layers
base_model = copy.deepcopy(model_1)          # Base model
expert_models = [model_1, model_2, model_3]  # List of expert models

moe_config = Idefics2ForCausalLMMoEConfig(config=config, k=1, num_expert_models=len(expert_models))
moe_model = Idefics2ForCausalLMMoE(moe_config, base_model, expert_models, layer_dtype=dtype)  # .to(DEVICE)

# count_parameters as defined in the inference example above
count_parameters(expert_models[0]), count_parameters(moe_model)
```
Delete models no longer needed:

```python
del model_1
del model_2
del model_3
```
Put MoE model on device:

```python
moe_model.to(DEVICE)
```
Test if it works (untrained):

```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
### Now train MoE gating function

Provide a few sample image-text prompts for each expert; the gating layers are trained on the hidden states these prompts produce:

```python
from PIL import Image
import requests

image_1 = Image.open("./VALIDATION/Q15.jpg")
image_1a = Image.open("./VALIDATION/Q31.jpg")

image_2 = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw)
image_2a = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw)

image_3 = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw)
image_3a = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw)

# One list of example prompts per expert model
prompts_per_expert = [
    [{"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1]},
     {"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1a]},
    ],

    [{"text": "User:<image>What is shown in this image. <end_of_utterance>Assistant: The image shows a human.", "image": [image_2]},
     {"text": "User:<image>What is shown in this image, and what does it mean in terms of human history? <end_of_utterance>Assistant: The image shows a historical image of human development.", "image": [image_2a]},
    ],

    [{"text": "User:<image>What is shown in this image. Provide a brief answer. <end_of_utterance>Assistant: This is an apple.", "image": [image_3]},
     {"text": "User:<image>What is shown in this image. Brief and concise answer. <end_of_utterance>Assistant: The image shows an apple.", "image": [image_3a]},
    ],
]

# Train the gating layers on hidden states produced by the per-expert prompts
gating_layer_params = moe_model.train_gating_layer_params_from_hidden_states(processor, prompts_per_expert,
                                                                             epochs=1000, loss_steps=100, lr=5e-5, layer_offset=0)

# Set the trained parameters in the gating layers
moe_model.set_gating_layer_params(gating_layer_params)
```
![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/mh4eFDuFsTBOYbjc38PYz.png)

Inference after MoE gating layers are trained:

```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
### Push to hub and save locally

```python
repo_id = '...'
moe_name = 'Cephalo-Idefics2-3x8b-beta'

processor.push_to_hub(f'{repo_id}/' + moe_name)
moe_model.push_to_hub(f'{repo_id}/' + moe_name)
```

Save locally:

```python
processor.save_pretrained(moe_name)
moe_model.save_pretrained(moe_name)
```
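To load the locally saved MoE back later, the same pattern as in the download section at the top of this card should apply; a sketch, assuming the custom code files are available so that `trust_remote_code` can resolve the MoE classes:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig

config = AutoConfig.from_pretrained(moe_name, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(moe_name, trust_remote_code=True)
moe_model = AutoModelForCausalLM.from_pretrained(
    moe_name, config=config,
    trust_remote_code=True, torch_dtype=torch.bfloat16,
).to(DEVICE)
```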