[Demo] VLMEvalKit now supports demo and evaluation for Yi-VL


Codebase: https://github.com/open-compass/VLMEvalKit
Model Class: https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/yi_vl.py
Steps to run Yi-VL:

You can run inference with Yi-VL through the following steps (a shell sketch follows the list):

1. Clone the repo https://github.com/01-ai/Yi to `path-to-Yi`.
2. Set up the environment and install the packages required by `path-to-Yi/VL/requirements.txt`.
3. Set `Yi_ROOT` in `vlmeval/config.py`:
    `Yi_ROOT = path-to-Yi`
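
For concreteness, here is a minimal sketch of the setup as shell commands, assuming a pip-based environment; `/path/to/Yi` is a placeholder for wherever you choose to clone the repo:

```sh
# Step 1: clone the Yi repo
git clone https://github.com/01-ai/Yi /path/to/Yi

# Step 2: install the Yi-VL requirements into your environment
pip install -r /path/to/Yi/VL/requirements.txt

# Step 3: point VLMEvalKit at the clone by editing vlmeval/config.py, e.g.
# Yi_ROOT = '/path/to/Yi'
```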

You are all set now! To run a demo for Yi-VL:

```python
from vlmeval import *

# Instantiate the Yi-VL-6B wrapper registered in VLMEvalKit
model = supported_VLM['Yi_VL_6B']()
# Ask a question about a local image; returns the model's reply as text
model.generate('apple.jpg', 'What is in this image?')
```
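
The 34B variant can be run the same way, assuming your copy of VLMEvalKit registers it: `model = supported_VLM['Yi_VL_34B']()`.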

To run an evaluation for Yi-VL, use `python run.py --model Yi_VL_6B --data {dataset_list}`, where `{dataset_list}` is one or more benchmark names supported by VLMEvalKit.
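
For example, assuming the benchmarks below are available in your version of VLMEvalKit:

```sh
# Evaluate Yi-VL-6B on MMBench (dev split, English) and MME
python run.py --model Yi_VL_6B --data MMBench_DEV_EN MME
```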