---
language: French German
tags:
- translation French German model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."

---

# legal_t5_small_trans_fr_de model

Model for translating legal text from French to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.


## Model description

legal_t5_small_trans_fr_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
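
For reference, here is a minimal sketch of how that configuration maps onto the `T5Config` class in `transformers`; the vocabulary size is an assumption taken from the standard `t5-small` default:

```python
from transformers import T5Config, T5ForConditionalGeneration

# Hypothetical reconstruction of the architecture described above.
# vocab_size=32128 is the standard t5-small default (assumed here).
config = T5Config(
    vocab_size=32128,
    d_model=512,            # dmodel = 512
    d_ff=2048,              # dff = 2,048
    num_heads=8,            # 8-headed attention
    num_layers=6,           # 6 encoder layers
    num_decoder_layers=6,   # 6 decoder layers
)

model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # roughly 60M
```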

## Intended uses & limitations

The model can be used for translation of legal texts from French to German.

### How to use

Here is how to use this model to translate legal text from French to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

# AutoModelForSeq2SeqLM replaces the deprecated AutoModelWithLMHead.
pipeline = TranslationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_fr_de"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."

pipeline([fr_text], max_length=512)
```
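
Equivalently, the model can be called directly rather than through a pipeline; a minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_fr_de")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_fr_de")

fr_text = "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."

# Tokenize the French input, generate the German translation, and decode it.
inputs = tokenizer(fr_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```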

## Training data

The legal_t5_small_trans_fr_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4,096) and the encoder-decoder architecture described above. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
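
A minimal sketch of that optimizer setup, using the `Adafactor` implementation from `transformers` (the learning rate and warmup length below are illustrative assumptions, not values from the original run):

```python
from torch.optim.lr_scheduler import LambdaLR
from transformers import Adafactor, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_fr_de")

# Adafactor configured to take an externally supplied learning rate.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,               # assumed peak learning rate
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

# Inverse square root schedule: flat for the first warmup_steps,
# then decaying proportionally to 1/sqrt(step).
warmup_steps = 10_000  # assumed warmup length
scheduler = LambdaLR(optimizer, lambda step: (warmup_steps / max(step, warmup_steps)) ** 0.5)
```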

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpora (across all language pairs) to obtain the vocabulary (via byte pair encoding) used with this model.
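
A minimal sketch of how such a vocabulary can be built with [SentencePiece](https://github.com/google/sentencepiece); the input file name and vocabulary size are illustrative assumptions:

```python
import sentencepiece as spm

# Train a unigram subword model over the pooled parallel corpora.
spm.SentencePieceTrainer.train(
    input="parallel_corpora_all_pairs.txt",  # hypothetical pooled text file
    model_prefix="legal_t5_spm",
    vocab_size=32000,                        # assumed vocabulary size
    model_type="unigram",
)

# Load the trained model and segment a sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="legal_t5_spm.model")
print(sp.encode("Les États membres notifient ces dispositions.", out_type=str))
```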


## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
|   legal_t5_small_trans_fr_de | 41.33|
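
For context, BLEU scores like the one above can be computed with [sacreBLEU](https://github.com/mjpost/sacrebleu); the hypothesis and reference below are illustrative, not taken from the actual test set:

```python
import sacrebleu

# Model outputs and the corresponding reference translations.
hypotheses = ["Die Mitgliedstaaten teilen der Kommission diese Vorschriften mit."]
references = ["Die Mitgliedstaaten teilen der Kommission diese Bestimmungen mit."]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```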


### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)