---
tags:
- vision-language model
- gemma
- generation
datasets:
- YanweiLi/MGM-Instruction
---

# MGM-2B Model Card

## Model details

The MGM framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously.

You can also try our other MGM series models:

Normal resolution setting: [MGM-7B](https://huggingface.co/YanweiLi/MGM-7B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B), [MGM-8x7B](https://huggingface.co/YanweiLi/MGM-8x7B), [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B)

High resolution setting: [MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD), [MGM-34B-HD](https://huggingface.co/YanweiLi/MGM-34B-HD)

**Model type:** MGM is an open-source chatbot trained by fine-tuning Gemma on GPT-generated multimodal instruction-following data. It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.

**Model version:** MGM with LLM Gemma-2B-it

**Model date:** MGM-2B was trained in 03/2024.

## License

Gemma is licensed under the Gemma Terms of Use License.

**Where to send questions or comments about the model:** https://github.com/dvlab-research/MGM/issues

## Intended use

**Primary intended uses:** The primary use is research on large multimodal models and chatbots.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training data

This model is trained on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [GitHub repository](https://github.com/dvlab-research/MGM) for more details. A minimal download sketch appears at the end of this card.

## Acknowledgement

This project is not affiliated with Google LLC.
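
As a rough starting point, the sketch below shows how the model weights and the MGM-Instruction data could be fetched with `huggingface_hub`. The repository id `YanweiLi/MGM-2B` is an assumption inferred from the naming of the sibling models linked above; inference itself requires the MGM codebase linked in this card and is not shown here.

```python
# Minimal download sketch (not an official usage example).
# Assumes the weights live at YanweiLi/MGM-2B, matching the naming
# of the other MGM repositories listed in this card.
from huggingface_hub import snapshot_download

# Fetch the model weights to a local cache directory.
model_dir = snapshot_download(repo_id="YanweiLi/MGM-2B")

# Fetch the instruction-tuning data; repo_type="dataset" is required
# for dataset repositories on the Hub.
data_dir = snapshot_download(
    repo_id="YanweiLi/MGM-Instruction",
    repo_type="dataset",
)

print("model:", model_dir)
print("data:", data_dir)
```

To run the model on images and prompts, point the inference scripts from https://github.com/dvlab-research/MGM at the downloaded `model_dir`.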