Coupling specific tasks with LM-attn poses potential risks of LLM degradation.

#2
by JosephusCheung - opened

After testing the vision part from this repo with the Qwen-7B-Chat weights, I observed that certain tasks, such as Chinese OCR, are coupled to the attention weights of this specific (VL-Chat) version of the LLM, while other tasks, such as English OCR and general Chinese/English VQA, are not coupled to it and still work with the base LLM (Qwen-7B-Base).

It can be inferred that, in order to achieve better scores on certain tasks, the LLM itself was trained on those tasks, and there is reason to believe the LLM has degraded as a result. I would like you to examine the benchmark results of the LLM.

Hi @JosephusCheung, thanks for your point. We fine-tuned the LLM during the training of Qwen-VL, so the LLM weights differ from those of Qwen-7B and Qwen-7B-Chat. There is a possibility that the fine-tuned LLM has degraded. We will examine the benchmark results of the fine-tuned LLM soon.

If possible, I would like to incorporate these changes by using adapters that are loaded on the fly during image reasoning only, in order to modify the attention weights of the LLM. I believe that, from a practical standpoint, this can be done without affecting text-reasoning performance. If there is no official implementation available, I would also consider implementing it myself.

Sure! You can use LoRA or other adapter techniques. Due to our computation budget, we haven't implemented it. You can try it yourself, and feel free to contact us with any issues.
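
For anyone following this thread, here is a minimal sketch of the adapter idea discussed above, using Hugging Face PEFT. It assumes a LoRA checkpoint (the path `qwen-vl-attn-lora` is hypothetical, not an official release) that captures only the attention-weight changes introduced by Qwen-VL fine-tuning; how those deltas would be extracted from the released Qwen-VL weights is out of scope here. The adapter is enabled only when an image is part of the prompt, so text-only reasoning falls back to the unmodified Qwen-7B-Chat weights.

```python
# Sketch only: the adapter repo/path "qwen-vl-attn-lora" is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

# Attach the vision-specific LoRA; the base Qwen-7B-Chat weights themselves stay untouched.
model = PeftModel.from_pretrained(base, "qwen-vl-attn-lora")  # hypothetical adapter checkpoint

def generate(prompt: str, has_image: bool) -> str:
    """Enable the LoRA layers only when the prompt involves an image."""
    if has_image:
        model.enable_adapter_layers()   # image reasoning: use the VL-tuned attention deltas
    else:
        model.disable_adapter_layers()  # text-only reasoning: original Qwen-7B-Chat attention
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

This is one possible design: keeping the delta in a separable adapter means the text-only path never sees the VL fine-tuning, which is exactly the degradation concern raised above.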

JosephusCheung changed discussion status to closed
