Sujit Pal committed on
Commit c0c0d12
1 Parent(s): 2a06c48

fix: added link to project

dashboard_featurefinder.py CHANGED
@@ -70,7 +70,7 @@ def app():
  contrastive learning to project images and caption text onto a common
  embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
  using the RSICD dataset (10k images and ~50k captions from the remote
- sensing domain).
+ sensing domain). Click here for [more information about our project](https://github.com/arampacha/CLIP-rsicd).

  This demo shows the ability of the model to find specific features
  (specified as text queries) in the image. As an example, say you wish to
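
A minimal sketch of the feature-finding idea described in the docstring above, assuming the flax-community/clip-rsicd-v2 checkpoint loads via transformers' FlaxCLIPModel. The 3x3 tiling, the file name, and the query text are illustrative assumptions, not the dashboard's actual code.

```python
# Illustrative sketch only: tiling scheme, file name, and query are assumptions.
import jax.numpy as jnp
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

image = Image.open("airport_1.jpg")         # hypothetical RSICD-style image
query = "planes parked on the tarmac"       # hypothetical text query

# Split the image into a 3x3 grid of tiles and score every tile against the query.
w, h = image.size
tiles = [image.crop((c * w // 3, r * h // 3, (c + 1) * w // 3, (r + 1) * h // 3))
         for r in range(3) for c in range(3)]

inputs = processor(text=[query], images=tiles, return_tensors="np", padding=True)
outputs = model(**inputs)

# logits_per_image has shape (num_tiles, 1): one similarity score per tile.
scores = outputs.logits_per_image[:, 0]
best = int(jnp.argmax(scores))
print(f"tile {best} best matches the query (score={float(scores[best]):.2f})")
```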
dashboard_image2image.py CHANGED
@@ -50,7 +50,7 @@ def app():
  contrastive learning to project images and caption text onto a common
  embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
  using the RSICD dataset (10k images and ~50k captions from the remote
- sensing domain).
+ sensing domain). Click here for [more information about our project](https://github.com/arampacha/CLIP-rsicd).

  This demo shows the image to image retrieval capabilities of this model, i.e.,
  given an image file name as a query, we use our fine-tuned CLIP model
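
A minimal sketch of the image-to-image retrieval flow described above, under the same FlaxCLIPModel assumption. The corpus file names are hypothetical, and the demo would normally precompute the image index rather than encoding it per query.

```python
# Illustrative sketch only: file names and corpus size are assumptions.
import jax.numpy as jnp
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

corpus_files = ["airport_1.jpg", "harbor_2.jpg", "farmland_3.jpg"]  # hypothetical
images = [Image.open(f) for f in corpus_files]

# Encode the corpus once and L2-normalize; in the demo this index is precomputed.
pixel_values = processor(images=images, return_tensors="np").pixel_values
corpus_emb = model.get_image_features(pixel_values)
corpus_emb = corpus_emb / jnp.linalg.norm(corpus_emb, axis=-1, keepdims=True)

# Encode the query image and rank the corpus by cosine similarity.
query_pixels = processor(images=Image.open("query_airport.jpg"),
                         return_tensors="np").pixel_values
query_emb = model.get_image_features(query_pixels)
query_emb = query_emb / jnp.linalg.norm(query_emb, axis=-1, keepdims=True)

scores = (corpus_emb @ query_emb.T)[:, 0]
for idx in jnp.argsort(-scores):
    print(corpus_files[int(idx)], float(scores[int(idx)]))
```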
dashboard_text2image.py CHANGED
@@ -30,7 +30,7 @@ def app():
  contrastive learning to project images and caption text onto a common
  embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
  using the RSICD dataset (10k images and ~50k captions from the remote
- sensing domain).
+ sensing domain). Click here for [more information about our project](https://github.com/arampacha/CLIP-rsicd).

  This demo shows the image to text retrieval capabilities of this model, i.e.,
  given a text query, we use our fine-tuned CLIP model to project the text query
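
A minimal sketch of the text-query retrieval flow described above, again assuming FlaxCLIPModel; the image file names and the example caption are hypothetical, and in the demo the image embeddings would be precomputed.

```python
# Illustrative sketch only: file names and the example caption are assumptions.
import jax.numpy as jnp
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

image_files = ["beach_1.jpg", "stadium_2.jpg", "desert_3.jpg"]  # hypothetical
pixel_values = processor(images=[Image.open(f) for f in image_files],
                         return_tensors="np").pixel_values
image_emb = model.get_image_features(pixel_values)
image_emb = image_emb / jnp.linalg.norm(image_emb, axis=-1, keepdims=True)

# Project the text query into the shared embedding space and rank the images.
text_inputs = processor(text=["two baseball fields next to a stadium"],
                        return_tensors="np", padding=True)
text_emb = model.get_text_features(text_inputs.input_ids,
                                   attention_mask=text_inputs.attention_mask)
text_emb = text_emb / jnp.linalg.norm(text_emb, axis=-1, keepdims=True)

scores = (image_emb @ text_emb.T)[:, 0]
for idx in jnp.argsort(-scores):
    print(image_files[int(idx)], float(scores[int(idx)]))
```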