Mainak Manna committed
Commit 943b6fc
Parent: db0fae9

First version of the model

Files changed (1):
  1. README.md +7 -3
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
  datasets:
  - dcep europarl jrc-acquis
  widget:
- - text: "QUESTION ÉCRITE E-1180/08"
+ - text: "Dans le cas contraire, rien ne s'oppose donc à l'interdiction des camps de nomades en Italie, n'est-ce pas?"
 
  ---
 
@@ -38,7 +38,7 @@ tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/l
  device=0
  )
 
- fr_text = "QUESTION ÉCRITE E-1180/08"
+ fr_text = "Dans le cas contraire, rien ne s'oppose donc à l'interdiction des camps de nomades en Italie, n'est-ce pas?"
 
  pipeline([fr_text], max_length=512)
  ```
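The hunk above excerpts the model card's usage snippet, so the surrounding code is not visible in this diff. Below is a minimal runnable sketch of how the full snippet plausibly reads, assuming a standard `transformers` `TranslationPipeline` setup; the import list, the full repo id (inferred from the truncated hunk header and the model name later in the diff), and the tokenizer keyword arguments are assumptions, not shown in the hunk.

```
# Hedged reconstruction of the excerpted snippet; see assumptions above.
from transformers import AutoModelWithLMHead, AutoTokenizer, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)

fr_text = "Dans le cas contraire, rien ne s'oppose donc à l'interdiction des camps de nomades en Italie, n'est-ce pas?"

# Returns a list like [{"translation_text": "..."}] with the Spanish output.
print(pipeline([fr_text], max_length=512))
```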
@@ -49,10 +49,14 @@ The legal_t5_small_trans_fr_es model was trained on [JRC-ACQUIS](https://wt-publ
 
  ## Training procedure
 
+ A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
+
+ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and uses an encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
+
  ### Preprocessing
 
  ### Pretraining
- An unigram model with 88M parameters is trained over the complete parallel corpus to get the vocabulary (with byte pair encoding), which is used with this model.
+
 
 
  ## Evaluation results
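The "Training procedure" paragraph added in the last hunk describes building a unigram vocabulary over the pooled parallel corpus but names no tooling. As one hedged illustration, such a vocabulary is commonly trained with SentencePiece; the input file name, output prefix, and vocabulary size below are hypothetical, not taken from the card.

```
import sentencepiece as spm

# Hypothetical sketch: train a unigram vocabulary over the pooled parallel
# corpus (all language pairs concatenated, one sentence per line).
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # assumed file name
    model_prefix="legal_t5_small_vocab",    # assumed output prefix
    model_type="unigram",                   # unigram model, as the card states
    vocab_size=32000,                       # assumed; the card gives no size
)
```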
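The same hunk also names AdaFactor with an inverse square root learning rate schedule but shows no optimizer code. A minimal sketch using the `Adafactor` implementation shipped with `transformers` follows; the stand-in module and every setting here are assumptions for illustration, not the authors' confirmed configuration.

```
import torch
from transformers.optimization import Adafactor

# Stand-in module purely for illustration; the real model is the ~220M-parameter
# encoder-decoder described above.
model = torch.nn.Linear(512, 512)

# With relative_step=True and lr=None, this Adafactor applies a time-dependent
# (inverse-square-root) step size internally, matching the schedule the card
# mentions; warmup_init smooths the earliest steps. All settings are assumed.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
```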