# MoMA Model Card

## Model details

**Model type:** MoMA is an open-source image personalization model. It combines new attention layers with a multimodal large language model fine-tuned from LLaVA-7B.

**Paper or resources for more information:** https://github.com/bytedance/MoMA/tree/main

**Where to send questions or comments about the model:** https://github.com/bytedance/MoMA/tree/main

## Intended use

**Primary intended uses:** The primary use of MoMA is research on personalized image generation tasks.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
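
The checkpoint files published with this card can be pulled directly from the Hub. The snippet below is a minimal sketch that assumes the `huggingface_hub` library is installed; it only downloads the weights, while the personalization pipeline itself (the new attention layers and the fine-tuned LLaVA-7B component) is provided in the GitHub repository linked above.

```python
# Minimal sketch: download the MoMA checkpoint files from the Hugging Face Hub.
# The repo id is taken from this card; running inference requires the code from
# https://github.com/bytedance/MoMA/tree/main (not shown here).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="KunpengSong/MoMA_llava_7b")
print(f"Checkpoint files downloaded to: {local_dir}")
```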
