---
language:
- multilingual
- en
license: apache-2.0
library_name: transformers
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
- mixture-of-experts
- science
- latex
datasets:
- lamm-mit/Cephalo-Bioinspired-Mechanics-Materials
- lamm-mit/Cephalo-Wikipedia-Materials
pipeline_tag: image-text-to-text
inference:
  parameters:
    temperature: 0.3
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
---
## Model Summary

Cephalo is a series of multimodal, materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks. 

The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder with an autoregressive transformer, enabling complex natural language understanding grounded in both visual and textual inputs. 

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png)

Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.

This version of Cephalo, lamm-mit/Cephalo-Idefics2-3x8b-beta, is a Mixture-of-Experts model based on variants and fine-tuned versions of the Idefics-2 model. The basic model architecture is as follows:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/b7BK8ZtDzTMsyFDi0wP3w.png)

The model has 20b parameters (3 experts, 8b parameters each, with 8b active parameters during inference).

### Download the Idefics-2 MoE model and sample inference code

```bash
pip install transformers -U
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig  

def count_parameters(model):
    total_params = sum(p.numel() for p in model.parameters())
    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    #number of parameters in b
    return total_params/1e9, trainable_params/1e9

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name_moe = f"lamm-mit/Cephalo-Idefics2-3x8b-beta"
config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True) 
moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe,config=config,
    trust_remote_code=True,  torch_dtype=torch.bfloat16,   
   # quantization_config=quantization_config,
).to(device)

count_parameters(moe_model)
```
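
The `quantization_config` argument is commented out above; if GPU memory is limited, the model can optionally be loaded in 4-bit instead. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed (it mirrors the `BitsAndBytesConfig` used later in this card):

```python
from transformers import BitsAndBytesConfig

# Optional: 4-bit quantization to reduce GPU memory use (requires bitsandbytes + accelerate)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe, config=config,
    trust_remote_code=True,
    device_map="auto",                       # device placement handled by accelerate
    quantization_config=quantization_config,
)
```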

Now use the downloaded model for inference:

```python
from transformers.image_utils import load_image
DEVICE='cuda'
image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
Output:

<pre style="white-space: pre-wrap;">
The image shows a group of ants climbing over a vertical surface. The ants are using their legs and antennae to navigate the surface, demonstrating their ability to adapt to different environments and overcome obstacles. This behavior is relevant for materials design because it highlights the ants' ability to optimize their movements and interactions with their surroundings, which can inspire the development of advanced materials that mimic these natural adaptations.
  
Multi-agent AI refers to the use of artificial intelligence algorithms to simulate and analyze the behavior of multiple agents, such as ants, in a system. This approach allows for the study of complex interactions and emergent properties that arise from the collective actions of individual agents. By understanding how ants navigate and interact with their environment, researchers can gain insights into the design of materials that exhibit similar properties, such as self-healing, adaptive behavior, and enhanced functionality.
</pre>

## Make an Idefics-2-MoE model from scratch using several pre-trained models

Download the .py files that implement the Idefics-2 Mixture-of-Experts vision model:

```bash
pip install huggingface_hub
```

```python
from huggingface_hub import HfApi, hf_hub_download
from tqdm.notebook import tqdm
import os
import shutil

# Repository details
repo_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta"
api = HfApi()

# List all files in the repository
files_in_repo = api.list_repo_files(repo_id)

# Filter for .py files
py_files = [file for file in files_in_repo if file.endswith('.py')]

# Directory to save the downloaded files
save_dir = "./Idefics2_MoE/"
os.makedirs(save_dir, exist_ok=True)

# Download each .py file
for file_name in tqdm(py_files):
    file_path = hf_hub_download(repo_id=repo_id, filename=file_name)
    new_path = os.path.join(save_dir, file_name)
    shutil.move(file_path, new_path)
    print(f"Downloaded: {file_name}")

print("Download completed.")
```
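
The MoE classes used later in this card (`Idefics2ForCausalLMMoE`, `Idefics2ForCausalLMMoEConfig`) are defined in these downloaded .py files. A minimal sketch for making them importable is to add the download directory to the Python path; the exact module name to import from depends on the downloaded filenames, so the import shown below is only a hypothetical example:

```python
import sys
sys.path.append(save_dir)  # make the downloaded .py modules importable

# Hypothetical import path; check the actual filenames downloaded above:
# from moe_idefics2 import Idefics2ForCausalLMMoE, Idefics2ForCausalLMMoEConfig
```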

Download the models that will form the experts, as well as the base model. As a simple example, we use:

1) A materials-science fine-tuned model: lamm-mit/Cephalo-Idefics-2-vision-8b-beta (model_1)
2) A chatty version: HuggingFaceM4/idefics2-8b-chatty (model_2)
3) A basic variant: HuggingFaceM4/idefics2-8b (model_3)

```python
import torch
from transformers import AutoProcessor, AutoConfig, AutoTokenizer
from transformers import Idefics2ForConditionalGeneration, BitsAndBytesConfig

DEVICE='cuda'

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id_1='lamm-mit/Cephalo-Idefics-2-vision-8b-beta'

model_1 = Idefics2ForConditionalGeneration.from_pretrained(  model_id_1,
                                                           torch_dtype=torch.bfloat16, #if your GPU allows
                                                           _attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed
                                                           trust_remote_code=True,
                                                           #quantization_config=quantization_config,
                                                          ) 
processor = AutoProcessor.from_pretrained(
    f"{model_id_1}",
    do_image_splitting=True
)

config =  AutoConfig.from_pretrained(model_id_1, trust_remote_code=True)

IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}"
processor.chat_template = IDEFICS2_CHAT_TEMPLATE
```
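
For reference, applying this template to a message in the format used elsewhere in this card yields a prompt like the one printed below (the output is derived directly from the template string; the question text is just an example):

```python
# Hypothetical check of the chat template output
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
print(prompt)
# User:<image>What is shown in this image?<end_of_utterance>
# Assistant:
```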

Now, load the rest of the models:
```python
model_id_2='HuggingFaceM4/idefics2-8b-chatty'

model_2 = Idefics2ForConditionalGeneration.from_pretrained(  model_id_2,
                                                           torch_dtype=torch.bfloat16, #if your GPU allows
                                                           _attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed
                                                           trust_remote_code=True,
                                                           #quantization_config=quantization_config,
                                                          ) 

model_id_3='HuggingFaceM4/idefics2-8b'

model_3 = Idefics2ForConditionalGeneration.from_pretrained(  model_id_3,
                                                           torch_dtype=torch.bfloat16, #if your GPU allows
                                                           _attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed
                                                           trust_remote_code=True,
                                                           #quantization_config=quantization_config,
                                                          ) 
```
Put on device:
```python
model_1.to(DEVICE)
model_2.to(DEVICE)
model_3.to(DEVICE)
```

### Construct MoE 

Here we show how a MoE is constructed from the set of expert models loaded earlier. We consider three models, model_1, model_2 and model_3. 

```python
import copy

# Idefics2ForCausalLMMoE and Idefics2ForCausalLMMoEConfig are defined in the
# .py files downloaded earlier.
dtype = torch.bfloat16  # Desired dtype for new layers
base_model = copy.deepcopy(model_1)  # Your base model
expert_models = [model_1, model_2, model_3]  # List of expert models

moe_config = Idefics2ForCausalLMMoEConfig(config=config, k=1, num_expert_models=len(expert_models))
moe_model = Idefics2ForCausalLMMoE(moe_config, base_model, expert_models, layer_dtype=dtype)

count_parameters(expert_models[0]), count_parameters(moe_model)
```
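
The `k=1` setting means the gating network routes each input to a single expert, which is why roughly 8b parameters are active during inference even though the full model has about 20b. The following is a minimal, self-contained sketch of that idea (a plain top-1 gate over toy expert layers), not the actual `Idefics2ForCausalLMMoE` implementation:

```python
import torch
import torch.nn as nn

class Top1MoELayer(nn.Module):
    """Conceptual top-1 (k=1) mixture-of-experts layer."""
    def __init__(self, hidden_dim, num_experts):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts)  # gating network
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_experts)]
        )

    def forward(self, hidden_states):                  # (batch, seq, hidden)
        logits = self.gate(hidden_states)              # (batch, seq, num_experts)
        top1 = logits.argmax(dim=-1)                   # chosen expert per token
        out = torch.zeros_like(hidden_states)
        for i, expert in enumerate(self.experts):
            mask = (top1 == i).unsqueeze(-1)           # tokens routed to expert i
            out = torch.where(mask, expert(hidden_states), out)
        return out

layer = Top1MoELayer(hidden_dim=16, num_experts=3)
print(layer(torch.randn(2, 5, 16)).shape)  # torch.Size([2, 5, 16])
```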
Delete models no longer needed:
```python
del model_1
del model_2
del model_3 
```
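Deleting the Python references alone may not release GPU memory immediately; running garbage collection and emptying the CUDA cache (standard PyTorch practice, not specific to this model) helps:
```python
import gc

gc.collect()
torch.cuda.empty_cache()
```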
Put MoE model on device:
```python
moe_model.to(DEVICE)
```
Test if it works (untrained, it may not produce desirable output since the gating layers have not been trained yet):
```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```

### Now train MoE gating function

We train the gating layers by providing sample images/prompts for each of the three experts. Here is a simple example training set: 

```python
from PIL import Image
import requests

image_1 = Image.open("./VALIDATION/Q15.jpg") 
image_1a = Image.open("./VALIDATION/Q31.jpg") 

image_2 = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw) 
image_2a = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw) 

image_3 = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw) 
image_3a = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw) 

prompts_per_expert = [
    [{"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1]}, 
     {"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1a]}, 
     ],

    [{"text": "User:<image>What is shown in this image. <end_of_utterance>Assistant: The image shows a human.", "image": [image_2]}, 
     {"text": "User:<image>What is shown in this image, and what does it mean in terms of human history? <end_of_utterance>Assistant: The image shows a historical image of human development.", "image": [image_2a]}, 
     ],
    
     [{"text": "User:<image>What is shown in this image. Provide a brief answer. <end_of_utterance>Assistant: This is an apple.", "image": [image_3]}, 
     {"text": "User:<image>What is shown in this image. Brief and concise answer. <end_of_utterance>Assistant: The image shows an apple.", "image": [image_3a]}, 
     ],
]

gating_layer_params = moe_model.train_gating_layer_params_from_hidden_states(processor, prompts_per_expert,
                                              epochs=1000, loss_steps=100,  lr=5e-5, layer_offset=0)

# Set parameters for a specific layer  
moe_model.set_gating_layer_params(gating_layer_params)
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/mh4eFDuFsTBOYbjc38PYz.png)
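
If you want to reuse the trained gating weights later without retraining, one option (an assumption, not part of the documented API) is to serialize the returned parameters with standard PyTorch tools:

```python
# Hypothetical convenience step: persist the trained gating parameters
torch.save(gating_layer_params, "gating_layer_params.pt")

# ...later, restore them into a freshly constructed MoE model
gating_layer_params = torch.load("gating_layer_params.pt")
moe_model.set_gating_layer_params(gating_layer_params)
```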


Now that the gating layers have been trained, we can run inference with the MoE model:

```python
from transformers.image_utils import load_image

image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = moe_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```

### Push to hub and save locally

We can save the MoE model either to the Hugging Face Hub or locally:

```python
repo_id='...'
moe_name='Cephalo-Idefics2-3x8b-beta'

processor.push_to_hub(f'{repo_id}/{moe_name}')
moe_model.push_to_hub(f'{repo_id}/{moe_name}')
```

Save locally:
```python
processor.save_pretrained(moe_name)
moe_model.save_pretrained(moe_name)

```

Loading the model works as shown above; it is included here again for completeness:
```python
model_name_moe = f'{repo_id}/{moe_name}'
config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True) 
moe_model = AutoModelForCausalLM.from_pretrained(
    model_name_moe,config=config,
    trust_remote_code=True,  torch_dtype=torch.bfloat16,   
   # quantization_config=quantization_config,
).to(device)

count_parameters(moe_model)
```