AdaptLLM committed
Commit
cd604ea
1 Parent(s): cbf8e58

Update README.md

Files changed (1)
  1. README.md +10 -8
README.md CHANGED
@@ -19,8 +19,8 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 **(2) Training Pipeline**: While the two-stage training--initially on image-caption pairs followed by visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.
 **(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.

-<p align='center'>
-<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
+<p align='left'>
+<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/bRu85CWwP9129bSCRzos2.png" width="1000">
 </p>

 ## How to use
@@ -81,12 +81,14 @@ AdaMLLM
 }
 ```

-[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
+[AdaptLLM](https://huggingface.co/papers/2309.09530) (ICLR 2024)
 ```bibtex
-@article{cheng2024instruction,
-title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
-author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
-journal={arXiv preprint arXiv:2406.14491},
-year={2024}
+@inproceedings{
+adaptllm,
+title={Adapting Large Language Models via Reading Comprehension},
+author={Daixuan Cheng and Shaohan Huang and Furu Wei},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=y886UXPEZ0}
 }
 ```