HF Hub already has several recent models that perform well on image-phrase grounding and referring expressions. Curious if we can use this RicoSCA dataset to train and test performance on UI data.
Yeah, I think it would be awesome to do, and it would be a pretty neat tool for developers to do things like automatic alt text.
FYI, I mashed up a RefExp-formatted version based on the newer UIBert dataset:
Will play with the multimodal RefExp task that is explored in the UIBert, Pix2Struct, and IPAProbing papers. Happy to collaborate if anyone else is working on this already.
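For anyone curious what a RefExp-style record might look like: the idea is to pair a UI screenshot with a natural-language referring phrase and a normalized target bounding box. Here's a minimal sketch of converting a raw UI element annotation into that shape — the field names, prompt template, and function name are my assumptions for illustration, not the actual schema of the dataset above:

```python
def to_refexp_record(image_id, element_text, bbox, img_w, img_h):
    """Convert one UI element annotation into a RefExp-style record.

    bbox is (xmin, ymin, xmax, ymax) in pixels; coordinates are
    normalized to [0, 1] by the screenshot width/height.
    NOTE: field names and the prompt template are illustrative
    assumptions, not the actual dataset schema.
    """
    xmin, ymin, xmax, ymax = bbox
    return {
        "image_id": image_id,
        # Referring phrase a grounding model should resolve to the box
        "prompt": f"click on the {element_text}",
        "target_bounding_box": {
            "xmin": xmin / img_w,
            "ymin": ymin / img_h,
            "xmax": xmax / img_w,
            "ymax": ymax / img_h,
        },
    }


# Example: a 100x40 px "submit" button at (540, 1800) on a 1080x1920 screen
record = to_refexp_record("screen_001", "submit button",
                          (540, 1800, 640, 1840), 1080, 1920)
print(record["prompt"])  # click on the submit button
```

A model trained on records like this could then be evaluated by checking whether its predicted box overlaps the target box above some IoU threshold.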