---
license: apache-2.0
language:
- en
base_model:
- Lin-Chen/open-llava-next-llama3-8b
tags:
- food
- recipe
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **food MLLM developed from LLaVA-NeXT-Llama3-8B** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**

**(2) Training Pipeline**: While a two-stage training pipeline (initial training on image-caption pairs, followed by visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.

**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.
## How to use

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

# Define your input image and instruction here:
## image
url = "https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/bRu85CWwP9129bSCRzos2.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

instruction = "What's in the image?"

model_path = "AdaptLLM/food-LLaVA-NeXT-Llama3-8B"

# =========================== You do NOT need to modify the following ===========================
# Load the processor
processor = LlavaNextProcessor.from_pretrained(model_path)

# Define image token
image_token = "<|reserved_special_token_4|>"

# Format the prompt
prompt = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language."
    f"<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{image_token}\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

# Load the model
model = LlavaNextForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

# Prepare inputs and generate output
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
answer_start = int(inputs["input_ids"].shape[-1])
output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens
pred = processor.decode(output[0][answer_start:], skip_special_tokens=True)
print(pred)
```

## Citation

If you find our work helpful, please cite us.

AdaMLLM

```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)

```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```
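
## Low-memory loading (optional)

The usage example above loads the model in fp16, which needs roughly 16 GB of GPU memory for the 8B weights. As a minimal sketch (assuming the optional `bitsandbytes` package is installed; this variant has not been validated against the results in the paper), the model can also be loaded with 4-bit quantization to reduce memory use:

```python
import torch
from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration

# Quantization settings: 4-bit NF4 weights with fp16 compute (requires the bitsandbytes package)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the same checkpoint with quantized weights; the rest of the usage example is unchanged
model = LlavaNextForConditionalGeneration.from_pretrained(
    "AdaptLLM/food-LLaVA-NeXT-Llama3-8B",
    quantization_config=quant_config,
    device_map="auto",
)
```

Generation quality with quantized weights may differ slightly from the fp16 setup above.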