---
tags:
- vision-language model
- llama
- yi
- generation
datasets:
- YanweiLi/MGM-Instruction
---

# MGM-34B-HD Model Card

## Model details

The MGM framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B, with HD image understanding, reasoning, and generation handled simultaneously.

You can also try our other MGM series models:

Normal resolution setting:
[MGM-2B](https://huggingface.co/YanweiLi/MGM-2B), [MGM-7B](https://huggingface.co/YanweiLi/MGM-7B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B), [MGM-8x7B](https://huggingface.co/YanweiLi/MGM-8x7B), [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B)

High resolution setting:
[MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD)

**Model type:**
MGM is an open-source chatbot trained by fine-tuning Nous-Hermes-2-Yi-34B on GPT-generated multimodal instruction-following data. It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.

**Model version:**
MGM HD version with the LLM Nous-Hermes-2-Yi-34B

**Model date:**
MGM-34B-HD was trained in 03/2024.

## License

Nous-Hermes-2-Yi-34B is licensed under the Apache-2.0 License.

**Where to send questions or comments about the model:**
https://github.com/dvlab-research/MGM/issues

## Intended use

**Primary intended uses:**
The primary use is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training data

This model is trained on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [Github](https://github.com/dvlab-research/MGM) repository for more details. A minimal sketch of downloading the checkpoint and data is given at the end of this card.

## Acknowledgement

This project is not affiliated with Google LLC.
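
## Downloading the checkpoint and data

The MGM-34B-HD checkpoint and the MGM-Instruction data referenced above are both hosted on the Hugging Face Hub. Below is a minimal sketch of fetching them locally with `huggingface_hub`; the `local_dir` paths are illustrative only and may differ from the directory layout the MGM codebase expects. The [Github](https://github.com/dvlab-research/MGM) repository remains the authoritative source for setup, training, and inference.

```python
# Minimal sketch (not part of the official MGM instructions): fetch the
# MGM-34B-HD weights and the MGM-Instruction data from the Hugging Face Hub,
# then follow the Github repository for environment setup and inference.
from huggingface_hub import snapshot_download

# Download the MGM-34B-HD checkpoint (large download; local_dir is illustrative).
model_dir = snapshot_download(
    repo_id="YanweiLi/MGM-34B-HD",
    local_dir="checkpoints/MGM-34B-HD",
)

# Download the MGM-Instruction data (a dataset repo on the Hub).
data_dir = snapshot_download(
    repo_id="YanweiLi/MGM-Instruction",
    repo_type="dataset",
    local_dir="data/MGM-Instruction",
)

print("Model weights in:", model_dir)
print("Instruction data in:", data_dir)
```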