---
inference: false
---

<br>
<br>

# MoMA Model Card

## Model details

**Model type:**
MoMA is an open-source image personalization model. It combines new attention layers with a multi-modal large language model fine-tuned from LLaVA-7B.
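
Below is a minimal sketch of fetching released weights from the Hugging Face Hub with `huggingface_hub`. The `repo_id` and local directory are placeholders, not the actual checkpoint location; the full inference pipeline is provided in the official GitHub repository linked below.

```python
# Minimal sketch: download a MoMA checkpoint from the Hugging Face Hub.
# The repo_id is a placeholder -- consult the GitHub repository for the
# actual checkpoint location and the complete generation pipeline.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(
    repo_id="<moma-checkpoint-repo>",  # placeholder, replace with the real repo id
    local_dir="checkpoints/MoMA",
)
print(f"MoMA weights downloaded to {checkpoint_dir}")
```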

**Paper or resources for more information:**
+ Project page: https://moma-adapter.github.io/
+ Github: https://github.com/bytedance/MoMA/tree/main
+ Paper: https://arxiv.org/abs/2404.05674
+ Online Demo: https://huggingface.co/spaces/yizhezhu/MoMA_zeroGPU

**Where to send questions or comments about the model:**
https://github.com/bytedance/MoMA/tree/main

## Intended use
**Primary intended uses:**
The primary intended use is research on personalized image generation.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.