---
inference: false
---


# MoMA Model Card

## Model details

**Model type:** MoMA is an open-source image personalization model. It combines new attention layers with a multimodal large language model fine-tuned from LLaVA-7B.

**Paper or resources for more information:**

**Where to send questions or comments about the model:** https://github.com/bytedance/MoMA/tree/main
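For researchers who want to experiment with the model locally, the checkpoint can be fetched with the `huggingface_hub` client. This is a minimal sketch, not an official usage snippet: the repository id below is an assumption inferred from this model card's location, and `download_moma_checkpoint` is a hypothetical helper name.

```python
# Hedged sketch: fetching the MoMA_llava_7b weights via huggingface_hub.
from huggingface_hub import snapshot_download

# Assumed repository id, inferred from this model card's page; verify it
# against the Hub before relying on it.
MOMA_REPO_ID = "KunpengSong/MoMA_llava_7b"

def download_moma_checkpoint(local_dir: str) -> str:
    """Download the full MoMA checkpoint into local_dir and return its path."""
    return snapshot_download(repo_id=MOMA_REPO_ID, local_dir=local_dir)

# Example call (downloads several GB, so it is commented out here):
# checkpoint_path = download_moma_checkpoint("./MoMA_llava_7b")
```

The actual inference pipeline (attention adapters plus the fine-tuned LLaVA-7B component) is provided by the GitHub repository linked above, so the downloaded files are meant to be consumed by that code rather than loaded directly.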

## Intended use

**Primary intended uses:** Research on personalized image generation tasks.

**Primary intended users:** Researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.