import gradio as gr
from transformers import pipeline

# One zero-shot image classification pipeline per Chinese CLIP checkpoint.
pipes = {
    "ViT/B-16": pipeline("zero-shot-image-classification", model="OFA-Sys/chinese-clip-vit-base-patch16"),
    "ViT/L-14": pipeline("zero-shot-image-classification", model="OFA-Sys/chinese-clip-vit-large-patch14"),
    "ViT/L-14@336px": pipeline("zero-shot-image-classification", model="OFA-Sys/chinese-clip-vit-large-patch14-336px"),
    "ViT/H-14": pipeline("zero-shot-image-classification", model="OFA-Sys/chinese-clip-vit-huge-patch14"),
}

inputs = [
    gr.Image(type="pil", label="Image"),
    gr.Textbox(label="Candidate labels (comma-separated)"),
    gr.Radio(
        choices=["ViT/B-16", "ViT/L-14", "ViT/L-14@336px", "ViT/H-14"],
        value="ViT/B-16",
        label="Model",
    ),
]


def shot(image, labels_text, model_name):
    # Split the comma-separated input into individual candidate labels.
    labels = [label.strip() for label in labels_text.strip().split(",")]
    # Each label is wrapped in the Chinese hypothesis template
    # "一张{}的图片。" ("a photo of {}.") before being scored against the image.
    res = pipes[model_name](images=image, candidate_labels=labels, hypothesis_template="一张{}的图片。")
    return {d["label"]: d["score"] for d in res}


iface = gr.Interface(
    shot,
    inputs,
    outputs="label",
    examples=[
        ["festival.jpg", "灯笼, 鞭炮, 对联", "ViT/B-16"],
        ["cat-dog-music.png", "音乐表演, 体育运动", "ViT/B-16"],
        ["football-match.jpg", "梅西, C罗, 马奎尔", "ViT/B-16"],
    ],
    description="""

Chinese CLIP is a contrastive-learning-based vision-language foundation model pretrained on large-scale Chinese image-text data. For more information, please refer to the paper and the official GitHub repository linked below. Chinese CLIP has also been merged into Hugging Face Transformers!
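
Since the checkpoints ship with Transformers, this demo's predictions can also be reproduced directly in Python. Below is a minimal sketch, assuming `transformers` is installed and a local copy of festival.jpg (one of the example images) is available:

```python
from transformers import pipeline

# Same checkpoint as the demo's ViT/B-16 option.
clf = pipeline("zero-shot-image-classification",
               model="OFA-Sys/chinese-clip-vit-base-patch16")

# Score the image against Chinese candidate labels, using the demo's
# hypothesis template "一张{}的图片。" ("a photo of {}.").
preds = clf("festival.jpg",
            candidate_labels=["灯笼", "鞭炮", "对联"],
            hypothesis_template="一张{}的图片。")
print(preds)  # [{"label": ..., "score": ...}, ...], highest score first
```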

Paper: https://arxiv.org/abs/2211.01335
GitHub: https://github.com/OFA-Sys/Chinese-CLIP (stars welcome! 🔥🔥)

To try the demo, upload a picture and enter several candidate labels in Chinese, separated by half-width commas (,).
You can also duplicate this Space and run it privately.

""", title="Zero-shot Image Classification (中文零样本图像分类)") iface.launch()