AImageLab
university

Collections (2)

Safe-CLIP: models and dataset (https://arxiv.org/abs/2311.16254)
LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1
aimagelab/LLaVA_MORE-llama_3_1-8B-pretrain
Image-Text-to-Text • Updated • 28 downloads
aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning
Image-Text-to-Text • Updated • 1.88k downloads • 8 likes
aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-pretrain
Image-Text-to-Text • Updated • 10 downloads
aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-finetuning
Image-Text-to-Text • Updated • 122 downloads • 1 like
Models (14)

aimagelab/CoDE
Image Feature Extraction • Updated • 1.51k downloads • 2 likes

aimagelab/ReflectiVA
Image-Text-to-Text • Updated • 19 downloads • 2 likes

aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning
Image-Text-to-Text • Updated • 16 downloads • 2 likes

aimagelab/LLaVA_MORE-llama_3_1-8B-S2-finetuning
Image-Text-to-Text • Updated • 8 downloads

aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-finetuning
Image-Text-to-Text • Updated • 122 downloads • 1 like

aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain
Image-Text-to-Text • Updated • 17 downloads

aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-pretrain
Image-Text-to-Text • Updated • 10 downloads

aimagelab/LLaVA_MORE-llama_3_1-8B-S2-pretrain
Image-Text-to-Text • Updated • 15 downloads

aimagelab/LLaVA_MORE-llama_3_1-8B-pretrain
Image-Text-to-Text • Updated • 28 downloads

aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning
Image-Text-to-Text • Updated • 1.88k downloads • 8 likes