CLIP-Italian

CLIP-Italian is a CLIP-like model for Italian. CLIP (Contrastive Language–Image Pre-training) was developed by researchers at OpenAI and efficiently learns visual concepts from natural-language supervision.
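The contrastive objective mentioned above pairs each image with its caption and pushes matched pairs together while pushing mismatched pairs apart. A minimal NumPy sketch of this symmetric loss (the function name and the temperature value are illustrative assumptions, not taken from this model's training code):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss used by CLIP-like models.

    image_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalise embeddings so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; entry (i, j) compares image i with text j.
    logits = image_emb @ text_emb.T / temperature

    def cross_entropy(l):
        # Row-wise softmax cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimising this loss makes each image embedding most similar to its own caption's embedding within the batch, which is what lets the trained model score arbitrary Italian captions against images.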

We fine-tuned a competitive Italian CLIP model with only ~1.4 million Italian image-text pairs. This model was built during the Flax/JAX Community Week organized by Hugging Face, with TPU usage sponsored by Google.

Training Data

We considered three main sources of data:

Training Procedure

Preprocessing, hardware used, hyperparameters...

Evaluation Performance

Limitations

Usage

Team members

Useful links
