
Italian T5 Cased Small Efficient EL32 🇮🇹

Shout-out to Stefan Schweter for contributing this model! This is a copy of stefan-it/it5-efficient-small-el32, moved and documented here for ease of use.

The IT5 model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original T5 model. Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an optimized model architecture that improves performance while reducing parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
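A quick way to verify the deep-encoder layout is to inspect the loaded configuration. This is a minimal sketch: num_layers and num_decoder_layers are the standard T5 config attributes in transformers, and only the 32-layer encoder is confirmed by the description above.

from transformers import AutoConfig

# Inspect the architecture of the efficient Small-EL32 checkpoint
config = AutoConfig.from_pretrained("it5/it5-efficient-small-el32")

print("Encoder layers:", config.num_layers)          # expected to be 32 for the EL32 variant
print("Decoder layers:", config.num_decoder_layers)  # shallow decoder of the Small architecture
print("Vocabulary size:", config.vocab_size)         # cased vocabulary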

This model is released as part of the project "IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation", by Gabriele Sarti and Malvina Nissim with the support of Hugging Face and with TPU usage sponsored by Google's TPU Research Cloud. All the training was conducted on a single TPU v3-8 VM on Google Cloud. A comprehensive overview of other released materials is provided in the gsarti/it5 repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

The inference widget is deactivated because the model needs task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The models in the it5 organization provide examples of this model fine-tuned on various downstream tasks.

Using the models

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the IT5 Efficient Small-EL32 tokenizer and seq2seq model
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32")

Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example here.
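As a minimal illustration of such a fine-tuning step, the sketch below uses the Seq2SeqTrainer API from transformers. It is not the exact recipe used for the released fine-tuned checkpoints: the CSV files, the "source"/"target" column names, and the hyperparameters are placeholders to be replaced with your own dataset and settings.

from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)
from datasets import load_dataset

model_id = "it5/it5-efficient-small-el32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder dataset with "source" and "target" text columns;
# replace with your own Italian seq2seq dataset.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def preprocess(batch):
    # Tokenize inputs and targets; lengths are illustrative defaults
    model_inputs = tokenizer(batch["source"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="it5-efficient-small-el32-finetuned",
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

After training, the resulting checkpoint can be loaded with the same AutoModelForSeq2SeqLM call shown above and used with model.generate() for inference on your task.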

Limitations

Due to the nature of the web-scraped corpus on which IT5 models were trained, it is likely that their usage could reproduce and amplify pre-existing biases in the data, resulting in potentially harmful content such as racial or gender stereotypes and conspiracist views. For this reason, the study of such biases is explicitly encouraged, and model usage should ideally be restricted to research-oriented and non-user-facing endeavors.

Model curators

For problems or updates on this model, please contact gabriele.sarti996@gmail.com.

Citation Information

@article{sarti-nissim-2022-it5,
    title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}