---
language: Italian Czech
tags:
- translation Italian Czech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "sull'aumento dei prezzi dei prodotti alimentari"

---

# legal_t5_small_trans_it_cs model

Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.


## Model description

legal_t5_small_trans_it_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
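
This configuration can be sketched with the `transformers` T5 classes. The values below follow the description above; `vocab_size` is left at the `t5-small` default and is an assumption here:

```python
from transformers import T5Config, T5ForConditionalGeneration

# Hypothetical sketch of the scaled-down configuration described above.
config = T5Config(
    d_model=512,   # dmodel
    d_ff=2048,     # dff
    num_heads=8,   # 8-headed attention
    num_layers=6,  # 6 layers each in the encoder and decoder
)

model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()):,}")  # roughly 60 million
```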

## Intended uses & limitations

The model can be used for translating legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False)
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_it_cs")

# device=0 runs on the first GPU; use device=-1 to run on CPU.
pipeline = TranslationPipeline(model=model, tokenizer=tokenizer, device=0)

it_text = "sull'aumento dei prezzi dei prodotti alimentari"

pipeline([it_text], max_length=512)
```
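
Equivalently, the pipeline can be skipped and `generate` called directly; a minimal sketch using the same model and tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False)
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_it_cs")

# Tokenize the Italian source text and decode the generated Czech output.
inputs = tokenizer("sull'aumento dei prezzi dei prodotti alimentari", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```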

## Training data

The legal_t5_small_trans_it_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, which together comprise 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod v3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 4,096, with the encoder-decoder architecture described above. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
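
A minimal sketch of this optimizer setup with the `Adafactor` implementation in `transformers`; whether training used this exact implementation, and the `warmup_init` setting, are assumptions:

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_it_cs")

# With lr=None and relative_step=True, Adafactor uses its built-in relative
# step size, which decays roughly as 1/sqrt(step), i.e. an inverse square
# root schedule.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    relative_step=True,
    scale_parameter=True,
    warmup_init=True,  # assumption; not stated in the card
)
```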

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (across all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
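
Such a vocabulary can be built with SentencePiece, for example; the tool choice, file name, and vocabulary size below are assumptions rather than details from the card:

```python
import sentencepiece as spm

# Train a unigram segmentation model on the pooled parallel corpus.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # placeholder for the 88M lines
    model_prefix="legal_t5_vocab",
    model_type="unigram",
    vocab_size=32000,                       # assumption
)

# Segment an example sentence with the resulting vocabulary.
sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("sull'aumento dei prezzi dei prodotti alimentari", out_type=str))
```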


## Evaluation results

When used on the translation test dataset, the model achieves the following results:

| Model | BLEU score |
|:-----:|:-----:|
|   legal_t5_small_trans_it_cs | 43.302|
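
BLEU scores of this kind can be computed with `sacrebleu`; a minimal sketch with placeholder strings, since the actual test set is not included here:

```python
import sacrebleu

hypotheses = ["model translation of sentence 1"]        # system outputs
references = [["reference translation of sentence 1"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```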


### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)