Languages: English
Multilinguality: monolingual
Size Categories: n<1K
Language Creators: other
Annotations Creators: machine-generated
How to run BLIP and/or LAVIS to caption an image set?

#1
by yvblake - opened

Can you describe the steps I would need to take to caption a whole image set?

The Hugging Face demo for BLIP is a nice proof of concept, but it doesn't help much when you need to caption thousands of images.
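
For reference, here is a minimal sketch of one way to batch-caption a folder of images with the public BLIP captioning checkpoint on the Hub, using the `BlipProcessor`/`BlipForConditionalGeneration` API from `transformers`. The `IMAGE_DIR` and `OUTPUT_FILE` paths are placeholders you would point at your own data:

```python
import os

import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Placeholder paths -- point these at your own image folder / output file.
IMAGE_DIR = "images"
OUTPUT_FILE = "captions.txt"

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the public BLIP captioning checkpoint from the Hugging Face Hub.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

extensions = (".jpg", ".jpeg", ".png", ".webp")
paths = sorted(
    os.path.join(IMAGE_DIR, f)
    for f in os.listdir(IMAGE_DIR)
    if f.lower().endswith(extensions)
)

with open(OUTPUT_FILE, "w") as out:
    for path in paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt").to(device)
        with torch.no_grad():
            generated = model.generate(**inputs, max_new_tokens=30)
        caption = processor.decode(generated[0], skip_special_tokens=True)
        out.write(f"{os.path.basename(path)}\t{caption}\n")
        print(path, "->", caption)
```

This processes one image per forward pass; for large sets you could stack several images into one `processor(...)` call or wrap the loop in a `DataLoader` to improve GPU utilization.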
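
If you would rather use LAVIS directly, its `load_model_and_preprocess` helper returns a model together with a matching image preprocessor. The sketch below follows the pattern in the LAVIS README; the image path is again a placeholder:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a BLIP captioning model plus the image preprocessor it was trained with.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

raw_image = Image.open("images/example.jpg").convert("RGB")  # placeholder path
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# generate() takes a dict with the preprocessed image batch and returns captions.
captions = model.generate({"image": image})
print(captions[0])
```

To caption a whole set, wrap the last few lines in a loop over your image files, as in the transformers sketch above.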
