Update README.md

--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 ---
 language: it
-license:
+license: gpl-3.0
 datasets:
 - wit
 - ctl/conceptualCaptions
@@ -14,6 +14,8 @@ tags:
 
 # Italian CLIP
 
+Paper: [Contrastive Language-Image Pre-training for the Italian Language](https://arxiv.org/abs/2108.08688)
+
 With a few tricks, we have been able to fine-tune a competitive Italian CLIP model with **only 1.4 million** training samples. Our Italian CLIP model is built upon the [Italian BERT](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) model provided by [dbmdz](https://huggingface.co/dbmdz) and the OpenAI [vision transformer](https://huggingface.co/openai/clip-vit-base-patch32).
 
 Do you want to test our model right away? We got you covered! You just need to head to our [demo application](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
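For readers new to CLIP-style models: at inference time the text encoder and the image encoder each produce an embedding, and zero-shot classification reduces to a temperature-scaled cosine-similarity softmax between one image embedding and several candidate caption embeddings. Below is a minimal sketch of that scoring step only; the toy vectors are invented for illustration and stand in for the real outputs of the vision transformer and the Italian BERT text encoder.

```python
import math

def normalize(v):
    # Scale a vector to unit length so a dot product equals cosine similarity.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(scores):
    # Convert raw similarity scores into probabilities.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    # CLIP-style scoring: cosine similarity between the image embedding
    # and each caption embedding, scaled by a temperature, then softmaxed.
    img = normalize(image_emb)
    sims = [sum(a * b for a, b in zip(img, normalize(t))) for t in text_embs]
    return softmax([temperature * s for s in sims])

# Toy example with three Italian captions; the embeddings are made up.
captions = ["una foto di un gatto", "una foto di un cane", "una foto di un'auto"]
image_embedding = [0.9, 0.1, 0.0]       # pretend: vision transformer output
caption_embeddings = [[1.0, 0.0, 0.0],  # pretend: Italian BERT outputs
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]]
probs = zero_shot_scores(image_embedding, caption_embeddings)
print(captions[max(range(len(probs)), key=probs.__getitem__)])
```

The real model performs exactly this comparison in a shared embedding space; the demo application linked above runs it with the trained encoders.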