metadata
language: Italian Swedish
tags:
  - translation Italian Swedish model
datasets:
  - dcep europarl jrc-acquis
widget:
  - text: >-
      K. considerando che, come avviene con tutti i sistemi di sanità
      elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi
      abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei
      professionisti sanitari, dei pazienti e delle commissioni competenti (per
      esempio, sulla protezione dei dati e sull’etica),

legal_t5_small_trans_it_sv model

Model for translating legal text from Italian to Swedish. It was first released in this repository. The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.

Model description

legal_t5_small_trans_it_sv is based on the t5-small model and was trained on a large corpus of parallel text. It is a smaller variant that scales the baseline t5 model down to dmodel = 512, dff = 2,048, 8-headed attention, and only 6 layers each in the encoder and decoder, giving it about 60 million parameters.
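For reference, the architecture described above maps onto the transformers T5Config as sketched below. This is an illustration, not the published training configuration; every value other than the four listed hyperparameters is a library default or an assumption.

from transformers import T5Config

# Sketch of the scaled-down t5-small architecture named in this card.
# Only d_model, d_ff, num_heads, and the layer counts come from the card.
config = T5Config(
    d_model=512,            # dmodel = 512
    d_ff=2048,              # dff = 2,048
    num_heads=8,            # 8-headed attention
    num_layers=6,           # 6 encoder layers
    num_decoder_layers=6,   # 6 decoder layers
)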

Intended uses & limitations

The model can be used for translating legal texts from Italian to Swedish.

How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv")
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False)

pipeline = TranslationPipeline(
    model=model,
    tokenizer=tokenizer,
    device=0,  # first GPU; use device=-1 to run on CPU
)

it_text = "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"

pipeline([it_text], max_length=512)
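The pipeline returns one dictionary per input text, with the Swedish translation under the translation_text key (standard TranslationPipeline output):

result = pipeline([it_text], max_length=512)
print(result[0]["translation_text"])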

Training data

The legal_t5_small_trans_it_sv model was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 5 million parallel texts.

Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096), with the encoder-decoder architecture described above and approximately 60 million parameters. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
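As a rough illustration (not the authors' published training code), AdaFactor with its built-in inverse square root schedule can be instantiated in transformers as below; model is assumed to be loaded as in the usage example above.

from transformers.optimization import Adafactor

# Sketch only: with relative_step=True, Adafactor derives a learning rate
# that decays with the inverse square root of the step count.
optimizer = Adafactor(
    model.parameters(),  # 'model' as loaded in the usage example
    lr=None,             # learning rate comes from the relative step
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,    # assumption; warmup behaviour is not stated in this card
)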

Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (via byte-pair encoding) that is used with this model.
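A minimal sketch of how such a unigram vocabulary could be built with SentencePiece; the input path and vocabulary size below are illustrative assumptions, not values taken from this card.

import sentencepiece as spm

# Hypothetical reconstruction: train a unigram tokenizer over the
# combined parallel corpus. File name and vocab_size are assumptions.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # assumed file name
    model_prefix="legal_t5_small",
    model_type="unigram",
    vocab_size=32000,  # assumed; 32k is typical for T5-style models
)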


Evaluation results

When used on the translation test dataset, the model achieves the following results:

Test results:

Model                          BLEU score
legal_t5_small_trans_it_sv     41.508
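To compute a comparable BLEU score on your own test set, here is a minimal sketch with sacreBLEU (not the authors' evaluation script; the hypotheses and references below are placeholders):

import sacrebleu

# Placeholder data: model outputs and gold Swedish references.
hypotheses = ["..."]
references = [["..."]]  # one reference stream, parallel to the hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)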

BibTeX entry and citation info

Created by Ahmed Elnaggar/@Elnaggar_AI | LinkedIn