Transformers · PyTorch · flava · pretraining · Inference Endpoints
aps committed on
Commit 883af9b
1 Parent(s): 9b2aad4

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -258,3 +258,7 @@ On COCO and Flickr30k retrieval, we report zero-shot accuracy, on image tasks, w
 ## Limitations
 
 Currently, FLAVA has many limitations. The image classification accuracy is not on par with CLIP on some of the tasks while text accuracy is not on par with BERT on some of the tasks suggesting possible room for improvement. FLAVA also doesn't work well on tasks containing scene text given the lack of scene text in most public datasets. Additionally, similar to CLIP, our approach to testing FLAVA also has an important limitation in the case of image tasks, where we use linear probes to evaluate FLAVA and there is evidence suggesting that linear probes can underestimate model performance.
+
+ ## Feedback/Questions
+
+ Please email Amanpreet at `amanpreet [at] nyu [dot] edu` for questions.
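
The Limitations paragraph above notes that image tasks are evaluated with linear probes. As a point of reference, here is a minimal sketch of what that protocol means: the FLAVA image encoder stays frozen and only a linear classifier is trained on its features. The checkpoint id (`facebook/flava-full`), the use of the `[CLS]` token as the feature vector, the scikit-learn logistic-regression probe, and the `train_images`/`train_labels`/`test_images`/`test_labels` placeholders are illustrative assumptions, not the exact setup used in the paper.

```python
# Rough sketch of linear-probe evaluation on frozen FLAVA image features.
# Dataset variables below are hypothetical placeholders.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import FlavaImageModel, FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")
encoder = FlavaImageModel.from_pretrained("facebook/flava-full").eval()

@torch.no_grad()
def extract_features(images):
    """Return one frozen feature vector per PIL image (the [CLS] token)."""
    inputs = processor(images=images, return_tensors="pt")
    hidden = encoder(pixel_values=inputs["pixel_values"]).last_hidden_state
    return hidden[:, 0].numpy()  # shape: (num_images, hidden_size)

# train_images / test_images: lists of PIL images; *_labels: integer class ids.
X_train = extract_features(train_images)
X_test = extract_features(test_images)

probe = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
print("linear-probe accuracy:", probe.score(X_test, test_labels))
```

Because only the probe's weights are trained, the score reflects how linearly separable the frozen features are, which is why (as the paragraph notes) it can underestimate what the full model could achieve with fine-tuning.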