patrickjohncyh committed
Commit 3c4c4a1
1 Parent(s): d75585e

Fix typo in model card

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -42,4 +42,4 @@ We acknowledge certain limitations of FashionCLIP and expect that it inherits ce
 
  Our investingations also suggests that the data used introduces certain limitaions in FashionCLIP. From the textual modality, given that most captions dervied from the Farfetch dataset are long, we observe that FashionCLIP maybe more performant in longer queries than shorter ones. From the image modality, FashionCLIP is also biased towards standard product images (centered, white background).
 
- Model selection, i.e. selecting an appropariate stopping critera during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection critera when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset usdd is relatively diverse and large.
+ Model selection, i.e. selecting an appropariate stopping critera during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection critera when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset used is relatively diverse and large.
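
As an aside for readers of the model card, the query-length observation in the paragraph above can be probed with a few lines of code. The sketch below is illustrative only and is not part of this commit: it assumes the `patrickjohncyh/fashion-clip` checkpoint loads with the standard Hugging Face `transformers` CLIP classes, and the image path and the two captions are made-up placeholders.

```python
# Illustrative sketch: compare a short query against a longer, caption-like query
# for one product image. Assumes the checkpoint is compatible with the standard
# CLIP classes in transformers; "product.jpg" and the captions are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
processor = CLIPProcessor.from_pretrained("patrickjohncyh/fashion-clip")

image = Image.open("product.jpg")  # e.g. a centered, white-background product shot
queries = [
    "red dress",                                           # short query
    "a long red evening dress with thin shoulder straps",  # longer, caption-like query
]

inputs = processor(text=queries, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds one image-text similarity score per query;
# softmax turns them into relative probabilities for a quick comparison.
print(outputs.logits_per_image.softmax(dim=-1))
```

Running such a comparison over a handful of in-domain product images is a quick way to sanity-check the long- versus short-query behaviour described in the diff.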