update text
- introduction.md +12 -0
- static/img/improvements.png +0 -0
introduction.md CHANGED
@@ -119,6 +119,18 @@ didn't go well. Eventually, the thing that worked out the best was fixing the lo
 is used after the computation of the similarity between the images and the texts in CLIP (see the code [here](https://github.com/clip-italian/clip-italian/blob/master/hybrid_clip/modeling_hybrid_clip.py#L64)).
 We got this idea from Nils' [video](https://youtu.be/RHXZKUr8qOY) on sentence embeddings.
 
+### Effect
+
+The following picture showcases the effect that these edits have had on our loss:
+
+<img src="https://huggingface.co/spaces/clip-italian/clip-italian-demo/raw/main/static/img/improvements.png" alt="drawing" width="600"/>
+
+The purple line is the original training run; you can see how many steps we needed to get the loss down. The yellow line is the
+loss with the new optimizer; it is **striking** how much time we save with this addition! The blue line shows the results when
+fixed scaling is added on top of the new optimizer. Finally, we added the backbone freezing, and you can see the
+results in the light-blue loss.
+
+
 # Scientific Validity
 
 ## Quantitative Evaluation
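For context on the hunk above: the sentence at line 119 describes a scaling factor that is applied after the image-text similarity is computed (the "fixed scaling" mentioned in the new "Effect" section). The snippet below is a minimal sketch of that idea, assuming the factor is essentially CLIP's temperature kept constant instead of learned; the constant value and function name are illustrative, and this is not the code in modeling_hybrid_clip.py.

```python
import jax.numpy as jnp

# Hypothetical constant; the value actually used in training may differ.
FIXED_LOGIT_SCALE = 20.0

def clip_logits(image_embeds, text_embeds, logit_scale=FIXED_LOGIT_SCALE):
    # L2-normalize both embedding sets so the dot product is a cosine similarity.
    image_embeds = image_embeds / jnp.linalg.norm(image_embeds, axis=-1, keepdims=True)
    text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True)
    # Pairwise similarity between every text and every image in the batch.
    logits_per_text = jnp.matmul(text_embeds, image_embeds.T)
    # The fixed scale is applied *after* the similarity computation,
    # which is the step the diff refers to.
    return logits_per_text * logit_scale
```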
static/img/improvements.png ADDED
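The new "Effect" paragraph compares runs with the new optimizer, fixed scaling, and a frozen backbone. As a rough illustration of the last change, the sketch below shows one common way to freeze backbone parameters with Optax while the rest of the model keeps training; the module names, optimizer choice, and learning rate are assumptions for illustration, not the project's actual training setup.

```python
from flax import traverse_util
import optax

def make_optimizer(learning_rate=1e-4):
    """Zero out updates for backbone parameters, train everything else."""
    def label_fn(params):
        # Hypothetical module names: treat the vision/text towers as frozen,
        # everything else (e.g. the projection layers) as trainable.
        flat = traverse_util.flatten_dict(params)
        labels = {
            path: "frozen" if path[0] in ("vision_model", "text_model") else "trainable"
            for path in flat
        }
        return traverse_util.unflatten_dict(labels)

    # multi_transform routes each parameter to the transform matching its label;
    # set_to_zero() effectively freezes the backbone without removing its params.
    return optax.multi_transform(
        {"trainable": optax.adamw(learning_rate), "frozen": optax.set_to_zero()},
        label_fn,
    )
```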