Cannot reproduce inference API result

#5
by aliencaocao - opened

I have an image and a set of labels. Using the example code (OpenCLIP), I cannot reproduce the inference API results, not even close. The inference API output is correct, but my local predictions are way off. I am running in full fp32.

I also cannot reproduce the results of this space: https://huggingface.co/spaces/andsteing/lit-demo

PyTorch Image Models org

The zero-shot API inference code for this model is here: https://github.com/huggingface/api-inference-community/blob/48c0c4b23f4e0a571626145e5a4a6433b7d7d813/docker_images/open_clip/app/pipelines/zero_shot_image_classification.py#L49-L84

Possibly the prompts or the handling of sigmoid + logit_bias isn't correct on your side? I used a minimal set of prompts based on OpenAI's smaller subset, and the sigmoid/logit_bias handling is specific to these models and differs from the other CLIP models.
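
For reference, a minimal sketch of the sigmoid + logit_bias scoring with open_clip. The checkpoint name, labels, and image path below are placeholders; substitute the model you are actually testing:

```python
import torch
import open_clip
from PIL import Image

# Placeholder checkpoint; swap in the model under discussion.
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:timm/ViT-B-16-SigLIP')
tokenizer = open_clip.get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP')
model.eval()

labels = ['a dog', 'a cat']  # placeholder labels
image = preprocess(Image.open('image.jpg')).unsqueeze(0)
text = tokenizer(labels)

with torch.no_grad():
    image_features = torch.nn.functional.normalize(model.encode_image(image), dim=-1)
    text_features = torch.nn.functional.normalize(model.encode_text(text), dim=-1)
    # SigLIP models score each image/label pair independently: a sigmoid over
    # the scaled similarity plus a learned bias, not the softmax used by
    # other CLIP models.
    logits = image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias
    probs = torch.sigmoid(logits)
```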

Yeah, the example code given in OpenCLIP does not include anything about prompt templates. After adding those, my results match the API now. The same problem exists with Google's SigLIP release, which I have raised in https://github.com/huggingface/transformers/issues/30951.
The example code here should be updated, or at least a note added.
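
For anyone hitting the same mismatch, a minimal sketch of prompt-template ensembling, assuming an open_clip model and tokenizer. The template list is an illustrative subset of OpenAI's prompts, not necessarily the exact set the API pipeline uses:

```python
import torch
import torch.nn.functional as F

# Illustrative subset of the OpenAI prompt templates.
templates = [
    'a photo of a {}.',
    'a photo of the {}.',
    'a close-up photo of a {}.',
]

def build_classifier(model, tokenizer, labels):
    # Embed every label under every template, then average the normalized
    # text embeddings per label (standard prompt ensembling, as done in
    # open_clip's zero-shot evaluation).
    with torch.no_grad():
        weights = []
        for label in labels:
            tokens = tokenizer([t.format(label) for t in templates])
            feats = F.normalize(model.encode_text(tokens), dim=-1)
            weights.append(F.normalize(feats.mean(dim=0), dim=-1))
    return torch.stack(weights)  # shape: (num_labels, embed_dim)
```

The resulting classifier matrix then plugs into the sigmoid + logit_bias scoring shown earlier in place of the per-label text features.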
