[Demo] VLMEvalKit now supports demo and evaluation for Yi-VL

#10 · opened by KennyUTC

Codebase: https://github.com/open-compass/VLMEvalKit
Model Class: https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/yi_vl.py
Steps to run Yi-VL:

You can perform inference with Yi-VL through the following steps (a minimal command sketch follows the list):
1. Clone the repo https://github.com/01-ai/Yi to `path-to-Yi`.
2. Set up the environment and install the packages required in `path-to-Yi/VL/requirements.txt`.
3. Set `Yi_ROOT` in `vlmeval/config.py`:
    `Yi_ROOT = path-to-Yi`
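
For steps 1 and 2, the commands would look roughly like this (a minimal sketch; `path-to-Yi` is a placeholder for your local clone directory):

```bash
# Step 1: clone the Yi repo to a local directory (placeholder: path-to-Yi)
git clone https://github.com/01-ai/Yi path-to-Yi

# Step 2: set up the environment and install the Yi-VL requirements
pip install -r path-to-Yi/VL/requirements.txt
```

Step 3 is just editing `vlmeval/config.py` so that `Yi_ROOT` points at that same directory.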

You are all set now! To run a demo for Yi-VL:

```python
from vlmeval import *
model = supported_VLM['Yi_VL_6B']()
model.generate('apple.jpg', 'What is in this image?')
```

To run evaluation for Yi-VL, use `python run.py --model Yi_VL_6B --data {dataset_list}`
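
For example, a concrete invocation might look like the following (the dataset names are illustrative; substitute whichever benchmark identifiers supported by VLMEvalKit you want for `{dataset_list}`):

```bash
# Hypothetical example: evaluate Yi-VL-6B on two benchmarks
# (MMBench_DEV_EN and MME are example dataset names; swap in the ones you need)
python run.py --model Yi_VL_6B --data MMBench_DEV_EN MME
```

The same command with `--model Yi_VL_34B` should evaluate the larger checkpoint, assuming that key is defined in `supported_VLM`.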

So how are the results?

Hi @aisensiy, here are the evaluation results of Yi-VL on 10 different benchmarks: https://openxlab.org.cn/apps/detail/kennyutc/open_mllm_leaderboard
In our evaluation, Yi-VL-6B shows weaker overall performance than Qwen-VL and XComposer-7B. Moreover, on most benchmarks, Yi-VL-34B performs worse than Yi-VL-6B.
