Easy evaluation of the zero-shot capabilities of LAION CLIP

#6
by fhvilshoj - opened

Hi!
Thanks for your big efforts in training this model. It's truly helping push forward the field of AI!

I wanted to show how we've evaluated the model against a bunch of others on a handful of medical datasets.

[Image: medical-linear-probe-accuracy.png]

We did it with this repo: https://github.com/encord-team/text-to-image-eval
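For anyone who wants to reproduce something similar without the repo, here's a minimal zero-shot sketch with open_clip. The model id and class prompts below are just placeholders, not the exact setup behind the plot:

```python
import torch
import open_clip

# Any open_clip-compatible checkpoint works here; this id is just an example.
MODEL_ID = "hf-hub:laion/CLIP-ViT-B-32-laion2B-s34B-b79K"

model, preprocess = open_clip.create_model_from_pretrained(MODEL_ID)
tokenizer = open_clip.get_tokenizer(MODEL_ID)
model.eval()

# Hypothetical class prompts for a medical dataset (illustrative only).
class_prompts = [
    "a chest x-ray of healthy lungs",
    "a chest x-ray showing pneumonia",
]

with torch.no_grad():
    text_features = model.encode_text(tokenizer(class_prompts))
    text_features /= text_features.norm(dim=-1, keepdim=True)

def zero_shot_predict(pil_image):
    """Return the index of the best-matching class prompt for one PIL image."""
    with torch.no_grad():
        image_features = model.encode_image(preprocess(pil_image).unsqueeze(0))
        image_features /= image_features.norm(dim=-1, keepdim=True)
        return (image_features @ text_features.T).argmax(dim=-1).item()
```

The linear-probe numbers in the plot come from training a classifier on frozen image features instead, but the feature extraction step is the same.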

Would be curious to hear how this stacks up against your findings!

LAION e.V. org

@fhvilshoj thanks for the share! I'm surprised there isn't more of a gap for skin cancer & x-ray between the big and small models, and also between the specifically med-fine-tuned models and the pretrained LAION/etc scores.

FYI on B/32: the best B/32 model is https://huggingface.co/laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K ... it should be better than this LAION2B one...
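If you want to give that checkpoint a try, it loads the same way via open_clip's hf-hub: syntax (a sketch, not tied to the eval repo above):

```python
import open_clip

# The 256x256 DataComp B/32 checkpoint mentioned above, pulled from the HF Hub.
MODEL_ID = "hf-hub:laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K"

model, preprocess = open_clip.create_model_from_pretrained(MODEL_ID)
tokenizer = open_clip.get_tokenizer(MODEL_ID)
```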

Thanks for the feedback! Yes, it is rather surprising. One thing the plot seems to suggest is that the gains from specialization are limited on "easier" datasets, and only make an actual difference when tasks become more niche, like distinguishing lung cancer types.

I'll take that back in the kitchen for a spin! 🛵
