srisweet committed
Commit c0b17af
1 Parent(s): cdf3907

Added a simple definition of CLIP-Italian as Introduction

Files changed (1): introduction.md (+2 −2)
introduction.md CHANGED

```diff
@@ -1,6 +1,6 @@
 # CLIP-Italian
 
-CLIP ([Radford et al., 2021](https://arxiv.org/abs/2103.00020)) is an amazing model that can learn to represent images and text jointly in the same space.
+CLIP-Italian is a multimodal model trained on ~1.4 million Italian text-image pairs, using an Italian BERT model as the text encoder and a Vision Transformer (ViT) as the image encoder. CLIP-Italian (Contrastive Language-Image Pre-training in Italian) is based on OpenAI's CLIP ([Radford et al., 2021](https://arxiv.org/abs/2103.00020)), a model that learns to represent images and text jointly in the same space.
 
 In this project, we aim to propose the first CLIP model trained on Italian data, which in this context can be considered a
 low-resource language. Using a few techniques, we have been able to fine-tune a SOTA Italian CLIP model with **only 1.4 million** training samples. Our Italian CLIP model
@@ -173,7 +173,7 @@ We selected two different tasks:
 + zero-shot classification, in which, given an image and a set of captions (or labels), the model finds
 the best matching caption for the image
 
-### Reproducibiliy
+### Reproducibility
 
 In order to make both experiments very easy to replicate, we share the Colab notebooks we used to compute the results.
```
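The zero-shot classification task described in the diff can be sketched as follows. This is a minimal illustration with made-up 4-dimensional embeddings (real CLIP embeddings are much higher-dimensional); `zero_shot_classify` is a hypothetical helper for this sketch, not part of the CLIP-Italian codebase.

```python
import numpy as np

def normalize(v):
    """L2-normalize vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_classify(image_emb, caption_embs):
    """Return the index of the caption whose embedding is most
    similar (by cosine similarity) to the image embedding."""
    img = normalize(image_emb)
    caps = normalize(caption_embs)
    sims = caps @ img  # cosine similarities, one per caption
    return int(np.argmax(sims))

# Toy embeddings standing in for encoder outputs.
image = np.array([0.9, 0.1, 0.0, 0.1])
captions = np.array([
    [0.0, 1.0, 0.0, 0.0],  # e.g. "un gatto sul divano"
    [1.0, 0.0, 0.0, 0.2],  # e.g. "una foto di un cane" (closest to image)
    [0.0, 0.0, 1.0, 0.0],  # e.g. "una macchina rossa"
])
best = zero_shot_classify(image, captions)
```

In a CLIP-style model, both encoders are trained contrastively so that matching image-text pairs end up close in this shared space, which is what makes the simple argmax-over-similarities rule work without task-specific fine-tuning.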