Datasets:
Tasks: Text-to-Image
Languages: English
Multilinguality: monolingual
Size Categories: n<1K
Language Creators: other
Annotations Creators: machine-generated
Source Datasets: huggan/few-shot-pokemon
License:
Generation of BLIP captions #10
opened by SpiridonSunRotator
Which model did you use to generate the captions? Was it Salesforce/blip-image-captioning-base?
What generation config did you use for the captions (number of beams, min/max tokens)?
Thanks in advance for the response.
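For context, a minimal sketch of how such captions could be produced with `transformers`, assuming the model is indeed Salesforce/blip-image-captioning-base. The beam count and token bounds below are illustrative placeholders, not the dataset author's actual config (that config is exactly what this thread is asking about):

```python
"""Hypothetical sketch: captioning an image with BLIP.

MODEL_ID and GEN_KWARGS are assumptions for illustration only;
the real values used for this dataset are unconfirmed.
"""
import sys

MODEL_ID = "Salesforce/blip-image-captioning-base"  # assumed, per the question

# Illustrative generation config: beam search with min/max token bounds.
GEN_KWARGS = {"num_beams": 4, "min_length": 5, "max_length": 30}

def caption_image(image, processor, model) -> str:
    """Generate one caption for a PIL image using the config above."""
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, **GEN_KWARGS)
    return processor.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__" and "--demo" in sys.argv:
    # Heavy imports and the model download happen only on explicit request.
    from PIL import Image
    from transformers import BlipForConditionalGeneration, BlipProcessor

    processor = BlipProcessor.from_pretrained(MODEL_ID)
    model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)
    image = Image.open("pokemon.png").convert("RGB")  # any dataset image
    print(caption_image(image, processor, model))
```

Knowing the actual `num_beams` and min/max token settings matters because beam search with a low `max_length` tends to yield short, generic captions, which affects downstream text-to-image fine-tuning.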