---
language: it
license: null
datasets:
- wit
- ctl/conceptualCaptions
- mscoco-it
tags:
- italian
- bert
- vit
- vision
---

# CLIP-Italian

CLIP-Italian is a CLIP-like model for Italian. The original CLIP model (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and efficiently learns visual concepts from natural language supervision: matching image–text pairs are pulled together in a shared embedding space, while non-matching pairs are pushed apart.
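As a sketch of what this shared embedding space buys you, the snippet below scores candidate captions against images by cosine similarity. The embeddings here are synthetic placeholders; in practice they would come from the model's image and text encoders.

```python
import numpy as np

# Synthetic embeddings for illustration only: 2 images and 3 candidate
# captions already projected into a (hypothetical) 512-d joint space.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(2, 512))
text_emb = rng.normal(size=(3, 512))

# CLIP-style scoring compares L2-normalized embeddings, so the dot
# product below is the cosine similarity between each image and caption.
image_emb /= np.linalg.norm(image_emb, axis=-1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=-1, keepdims=True)
logits = image_emb @ text_emb.T  # shape (2, 3): one row of scores per image

# A softmax over captions turns the scores into a per-image ranking.
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
```

Each row of `probs` is a probability distribution over the candidate captions for one image; the highest entry is the model's best match.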

We fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model was developed during the Flax/JAX Community Week organized by Hugging Face, with TPU usage sponsored by Google.

## Training Data

We considered three main sources of data:

- WIT
- Conceptual Captions (`ctl/conceptualCaptions`)
- MSCOCO-IT

## Training Procedure

Preprocessing, hardware used, hyperparameters...
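While those details are still to be filled in, the objective a CLIP-like model optimizes can be sketched as a symmetric contrastive (InfoNCE-style) loss over a batch of paired embeddings. The batch size, embedding dimension, and temperature below are illustrative assumptions, not the values used for this model.

```python
import numpy as np

def _logsumexp(x, axis):
    # Numerically stable log-sum-exp along one axis.
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Sketch only: the temperature default is an assumption for illustration.
    """
    # Normalize so the dot product below is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (batch, batch)

    # The matching caption for image i sits on the diagonal; take
    # cross-entropy in both directions (image-to-text and text-to-image).
    diag = np.arange(logits.shape[0])
    log_probs_i2t = logits - _logsumexp(logits, axis=1)
    log_probs_t2i = logits - _logsumexp(logits, axis=0)
    return -(log_probs_i2t[diag, diag].mean()
             + log_probs_t2i[diag, diag].mean()) / 2

# Toy batch of 4 pairs in a 16-d space, purely to exercise the function.
rng = np.random.default_rng(0)
loss_random = clip_contrastive_loss(rng.normal(size=(4, 16)),
                                    rng.normal(size=(4, 16)))
emb = rng.normal(size=(4, 16))
loss_aligned = clip_contrastive_loss(emb, emb, temperature=0.01)
```

Minimizing this loss pulls each image toward its own caption and away from the other captions in the batch, which is why perfectly aligned pairs (`loss_aligned`) score lower than random ones (`loss_random`).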

## Evaluation Performance

## Limitations

## Usage

## Team members

## Useful links