---
tags:
- vision-language model
- mixtral
- generation
datasets:
- YanweiLi/MGM-Instruction
---
# MGM-8x7B Model Card

## Model details
The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously. You can also try our other MGM series models:
- Normal resolution setting: MGM-2B, MGM-7B, MGM-13B, MGM-34B
- High resolution setting: MGM-7B-HD, MGM-13B-HD, MGM-8x7B-HD, MGM-34B-HD
**Model type:** MGM is an open-source chatbot trained by fine-tuning Mixtral-8x7B on GPT-generated multimodal instruction-following data. It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.

**Model version:** MGM with LLM Mixtral-8x7B-Instruct-v0.1

**Model date:** MGM-8x7B was trained in 03/2024.
## License

Mixtral-8x7B is licensed under the Apache 2.0 license.

**Where to send questions or comments about the model:** https://github.com/dvlab-research/MGM/issues
## Intended use

**Primary intended uses:** The primary use is research on large multimodal models and chatbots.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training data

This model is trained on the MGM-Instruction dataset; please refer to the GitHub repository for more details.
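As a minimal sketch of how the instruction data could be fetched locally, assuming only the public `huggingface_hub` API (the helper name and the `local_dir` parameter default are illustrative, not part of the official MGM tooling):

```python
from huggingface_hub import snapshot_download


def fetch_mgm_instruction_data(local_dir=None):
    """Download a local copy of the MGM-Instruction dataset repo.

    Returns the path of the downloaded snapshot. Network access and
    sufficient disk space are required; this is an illustrative helper,
    not part of the official MGM codebase.
    """
    return snapshot_download(
        repo_id="YanweiLi/MGM-Instruction",
        repo_type="dataset",  # the repo is hosted as a dataset, not a model
        local_dir=local_dir,
    )
```

Passing a `local_dir` pins the files to a known location; otherwise they land in the Hugging Face cache directory.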
## Acknowledgement

This project is not affiliated with Google LLC.