radinplaid committed (verified)
Commit 1e18133 · 1 Parent(s): 41b2fa9

Upload folder using huggingface_hub

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,110 @@
---
language:
- en
- el
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.el-en
model-index:
- name: quickmt-el-en
  results:
  - task:
      name: Translation ell-eng
      type: translation
      args: ell-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: ell_Grek eng_Latn devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 34.93
    - name: CHRF
      type: chrf
      value: 61.4
    - name: COMET
      type: comet
      value: 87.09
---


# `quickmt-el-en` Neural Machine Translation Model

`quickmt-el-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `el` (Greek) into `en` (English).


## Try it on our Hugging Face Space

Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo


## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 195M-parameter "big" transformer with 8 encoder layers and 2 decoder layers
* Separate source and target SentencePiece vocabularies of 20k tokens each
* Exported to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format for fast inference
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.el-en/tree/main

See the `eole` model configuration in this repository for further details, and the `eole-model` directory for the raw `eole` (PyTorch) model.


## Usage with `quickmt`

If you want to run GPU inference, install the NVIDIA CUDA toolkit first.

Next, install the `quickmt` Python library and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/

quickmt-model-download quickmt/quickmt-el-en ./quickmt-el-en
```

Finally, use the model in Python:

```python
from quickmt import Translator

# Auto-detects GPU; set device="cpu" to force CPU inference
t = Translator("./quickmt-el-en/", device="auto")

# Translate - set beam_size=1 for faster (but slightly lower-quality) output
sample_text = 'Ο Δρ Έχουντ Ουρ, καθηγητής ιατρικής του Πανεπιστημίου Νταλουζί στο Χάλιφαξ της Νέας Σκωτίας και πρόεδρος του κλινικού και επιστημονικού τμήματος της Καναδικής Ένωσης Διαβήτη επεσήμανε ότι η έρευνα βρίσκεται ακόμη σε αρχικό στάδιο.'

t(sample_text, beam_size=5)
```

> 'Dr. Ehud Ur, a professor of medicine at Dalouzi University in Halifax, Nova Scotia and president of the clinical and scientific division of the Canadian Diabetes Association, said the research is still in its early stages.'

```python
# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```

> 'Dr. Evet Ur, Professor of Medicine at Dalusi University in Halifax, Nova Scotia and Chairman of Clinical and Scientific Department of the Canadian Diabetes Association, said the research was still at an early stage.'

The model is in CTranslate2 format and the tokenizers are SentencePiece models, so you can use `ctranslate2` directly instead of going through `quickmt`. It should also be possible to use this model with, for example, [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`.


## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("ell_Grek" -> "eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (larger batch sizes are faster).


## el -> en flores-devtest metrics

| Model                             |   bleu |   chrf2 |   comet22 |   Time (s) |
|:----------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-el-en             |  34.93 |   61.4  |     87.09 |       1.55 |
| Helsinki-NLP/opus-mt-tc-big-el-en |  34.3  |   61.45 |     86.86 |       3.92 |
| facebook/nllb-200-distilled-600M  |  34.75 |   60.86 |     86.79 |      23.01 |
| facebook/nllb-200-distilled-1.3B  |  37.59 |   63.22 |     87.85 |      41.7  |
| facebook/m2m100_418M              |  27.26 |   55.95 |     83.17 |      20.67 |
| facebook/m2m100_1.2B              |  33.21 |   60.22 |     86.35 |      38.88 |
README.md CHANGED
@@ -1,3 +1,110 @@
- ---
- license: cc-by-4.0
- ---

---
language:
- en
- el
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.el-en
model-index:
- name: quickmt-el-en
  results:
  - task:
      name: Translation ell-eng
      type: translation
      args: ell-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: ell_Grek eng_Latn devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 34.93
    - name: CHRF
      type: chrf
      value: 61.4
    - name: COMET
      type: comet
      value: 87.09
---


# `quickmt-el-en` Neural Machine Translation Model

`quickmt-el-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `el` (Greek) into `en` (English).


## Try it on our Hugging Face Space

Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo


## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 195M-parameter "big" transformer with 8 encoder layers and 2 decoder layers
* Separate source and target SentencePiece vocabularies of 20k tokens each
* Exported to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format for fast inference
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.el-en/tree/main

See the `eole` model configuration in this repository for further details, and the `eole-model` directory for the raw `eole` (PyTorch) model.


## Usage with `quickmt`

If you want to run GPU inference, install the NVIDIA CUDA toolkit first.

Next, install the `quickmt` Python library and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/

quickmt-model-download quickmt/quickmt-el-en ./quickmt-el-en
```

Finally, use the model in Python:

```python
from quickmt import Translator

# Auto-detects GPU; set device="cpu" to force CPU inference
t = Translator("./quickmt-el-en/", device="auto")

# Translate - set beam_size=1 for faster (but slightly lower-quality) output
sample_text = 'Ο Δρ Έχουντ Ουρ, καθηγητής ιατρικής του Πανεπιστημίου Νταλουζί στο Χάλιφαξ της Νέας Σκωτίας και πρόεδρος του κλινικού και επιστημονικού τμήματος της Καναδικής Ένωσης Διαβήτη επεσήμανε ότι η έρευνα βρίσκεται ακόμη σε αρχικό στάδιο.'

t(sample_text, beam_size=5)
```

> 'Dr. Ehud Ur, a professor of medicine at Dalouzi University in Halifax, Nova Scotia and president of the clinical and scientific division of the Canadian Diabetes Association, said the research is still in its early stages.'

```python
# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```

> 'Dr. Evet Ur, Professor of Medicine at Dalusi University in Halifax, Nova Scotia and Chairman of Clinical and Scientific Department of the Canadian Diabetes Association, said the research was still at an early stage.'

The model is in CTranslate2 format and the tokenizers are SentencePiece models, so you can use `ctranslate2` directly instead of going through `quickmt`. It should also be possible to use this model with, for example, [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`.
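If you'd rather not install `quickmt`, here is a minimal sketch of driving the CTranslate2 model and the SentencePiece tokenizers directly. It assumes the model was downloaded to `./quickmt-el-en/` as above and that the directory contains `model.bin` alongside the `src.spm.model` and `tgt.spm.model` files shipped in this repository; the short Greek sentence is only an illustration:

```python
import ctranslate2
import sentencepiece as spm

model_dir = "./quickmt-el-en/"  # assumed download location from the steps above

translator = ctranslate2.Translator(model_dir, device="cpu")
src_sp = spm.SentencePieceProcessor(model_file=model_dir + "src.spm.model")
tgt_sp = spm.SentencePieceProcessor(model_file=model_dir + "tgt.spm.model")

# Tokenize the Greek source, translate, then detokenize the best hypothesis
src_text = "Η έρευνα βρίσκεται ακόμη σε αρχικό στάδιο."
tokens = src_sp.encode(src_text, out_type=str)
result = translator.translate_batch([tokens], beam_size=5)
print(tgt_sp.decode(result[0].hypotheses[0]))
```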


## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("ell_Grek" -> "eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (larger batch sizes are faster).
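As a rough illustration of how the `bleu` and `chrf2` columns are computed (a sketch, not the exact evaluation script), the sacrebleu metric classes can be used as follows; the placeholder lists stand in for the 1012 system outputs and reference translations:

```python
from sacrebleu.metrics import BLEU, CHRF

# Placeholder data; in practice these are the flores-devtest hypotheses
# and references for ell_Grek -> eng_Latn
hyps = ["The research is still in its early stages."]
refs = ["The research is still at an early stage."]

print(BLEU().corpus_score(hyps, [refs]).score)  # bleu
print(CHRF().corpus_score(hyps, [refs]).score)  # chrf2 (chrF with beta=2 is sacrebleu's default)
```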

## el -> en flores-devtest metrics

| Model                             |   bleu |   chrf2 |   comet22 |   Time (s) |
|:----------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-el-en             |  34.93 |   61.4  |     87.09 |       1.55 |
| Helsinki-NLP/opus-mt-tc-big-el-en |  34.3  |   61.45 |     86.86 |       3.92 |
| facebook/nllb-200-distilled-600M  |  34.75 |   60.86 |     86.79 |      23.01 |
| facebook/nllb-200-distilled-1.3B  |  37.59 |   63.22 |     87.85 |      41.7  |
| facebook/m2m100_418M              |  27.26 |   55.95 |     83.17 |      20.67 |
| facebook/m2m100_1.2B              |  33.21 |   60.22 |     86.35 |      38.88 |
config.json ADDED
@@ -0,0 +1,10 @@
{
  "add_source_bos": false,
  "add_source_eos": false,
  "bos_token": "<s>",
  "decoder_start_token": "<s>",
  "eos_token": "</s>",
  "layer_norm_epsilon": 1e-06,
  "multi_query_attention": false,
  "unk_token": "<unk>"
}
ct2-elen/config.json ADDED
@@ -0,0 +1,10 @@
{
  "add_source_bos": false,
  "add_source_eos": false,
  "bos_token": "<s>",
  "decoder_start_token": "<s>",
  "eos_token": "</s>",
  "layer_norm_epsilon": 1e-06,
  "multi_query_attention": false,
  "unk_token": "<unk>"
}
ct2-elen/eole-config.yaml ADDED
@@ -0,0 +1,98 @@
## IO
save_data: data
overwrite: True
seed: 1234
report_every: 100
valid_metrics: ["BLEU"]
tensorboard: true
tensorboard_log_dir: tensorboard

### Vocab
src_vocab: el.eole.vocab
tgt_vocab: en.eole.vocab
src_vocab_size: 20000
tgt_vocab_size: 20000
vocab_size_multiple: 8
share_vocab: false
n_sample: 0

data:
  corpus_1:
    # path_src: hf://quickmt/quickmt-train.el-en/el
    # path_tgt: hf://quickmt/quickmt-train.el-en/en
    # path_sco: hf://quickmt/quickmt-train.el-en/sco
    path_src: train.el
    path_tgt: train.en
  valid:
    path_src: valid.el
    path_tgt: valid.en

transforms: [sentencepiece, filtertoolong]
transforms_configs:
  sentencepiece:
    src_subword_model: "el.spm.model"
    tgt_subword_model: "en.spm.model"
  filtertoolong:
    src_seq_length: 256
    tgt_seq_length: 256

training:
  # Run configuration
  model_path: quickmt-el-en-eole-model
  #train_from: model
  keep_checkpoint: 4
  train_steps: 100000
  save_checkpoint_steps: 5000
  valid_steps: 5000

  # Train on a single GPU
  world_size: 1
  gpu_ranks: [0]

  # Batching 10240
  batch_type: "tokens"
  batch_size: 8000
  valid_batch_size: 4096
  batch_size_multiple: 8
  accum_count: [10]
  accum_steps: [0]

  # Optimizer & Compute
  compute_dtype: "fp16"
  optim: "adamw"
  #use_amp: False
  learning_rate: 2.0
  warmup_steps: 4000
  decay_method: "noam"
  adam_beta2: 0.998

  # Data loading
  bucket_size: 128000
  num_workers: 4
  prefetch_factor: 32

  # Hyperparams
  dropout_steps: [0]
  dropout: [0.1]
  attention_dropout: [0.1]
  max_grad_norm: 0
  label_smoothing: 0.1
  average_decay: 0.0001
  param_init_method: xavier_uniform
  normalization: "tokens"

model:
  architecture: "transformer"
  share_embeddings: false
  share_decoder_embeddings: false
  hidden_size: 1024
  encoder:
    layers: 8
  decoder:
    layers: 2
  heads: 8
  transformer_ff: 4096
  embeddings:
    word_vec_size: 1024
  position_encoding_type: "SinusoidalInterleaved"
ct2-elen/model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f36d96759aa1c73a9717a55bd4ee92285767ae338c06e80cfea406f55addb548
size 401699775
ct2-elen/source_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
ct2-elen/src.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba83bbe4b5aec0bebb3cb9651f0a5ea609bec129ff834c1934c2eeb6be7bfcdc
size 704100
ct2-elen/target_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
ct2-elen/tgt.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:327b717adc21439f5e70ee9bf1a7a6d6668f21045f05d94175e61f5554860563
size 587829
eole-config.yaml ADDED
@@ -0,0 +1,98 @@
## IO
save_data: data
overwrite: True
seed: 1234
report_every: 100
valid_metrics: ["BLEU"]
tensorboard: true
tensorboard_log_dir: tensorboard

### Vocab
src_vocab: el.eole.vocab
tgt_vocab: en.eole.vocab
src_vocab_size: 20000
tgt_vocab_size: 20000
vocab_size_multiple: 8
share_vocab: false
n_sample: 0

data:
  corpus_1:
    # path_src: hf://quickmt/quickmt-train.el-en/el
    # path_tgt: hf://quickmt/quickmt-train.el-en/en
    # path_sco: hf://quickmt/quickmt-train.el-en/sco
    path_src: train.el
    path_tgt: train.en
  valid:
    path_src: valid.el
    path_tgt: valid.en

transforms: [sentencepiece, filtertoolong]
transforms_configs:
  sentencepiece:
    src_subword_model: "el.spm.model"
    tgt_subword_model: "en.spm.model"
  filtertoolong:
    src_seq_length: 256
    tgt_seq_length: 256

training:
  # Run configuration
  model_path: quickmt-el-en-eole-model
  #train_from: model
  keep_checkpoint: 4
  train_steps: 100000
  save_checkpoint_steps: 5000
  valid_steps: 5000

  # Train on a single GPU
  world_size: 1
  gpu_ranks: [0]

  # Batching 10240
  batch_type: "tokens"
  batch_size: 8000
  valid_batch_size: 4096
  batch_size_multiple: 8
  accum_count: [10]
  accum_steps: [0]

  # Optimizer & Compute
  compute_dtype: "fp16"
  optim: "adamw"
  #use_amp: False
  learning_rate: 2.0
  warmup_steps: 4000
  decay_method: "noam"
  adam_beta2: 0.998

  # Data loading
  bucket_size: 128000
  num_workers: 4
  prefetch_factor: 32

  # Hyperparams
  dropout_steps: [0]
  dropout: [0.1]
  attention_dropout: [0.1]
  max_grad_norm: 0
  label_smoothing: 0.1
  average_decay: 0.0001
  param_init_method: xavier_uniform
  normalization: "tokens"

model:
  architecture: "transformer"
  share_embeddings: false
  share_decoder_embeddings: false
  hidden_size: 1024
  encoder:
    layers: 8
  decoder:
    layers: 2
  heads: 8
  transformer_ff: 4096
  embeddings:
    word_vec_size: 1024
  position_encoding_type: "SinusoidalInterleaved"
eole-model/config.json ADDED
@@ -0,0 +1,132 @@
{
  "overwrite": true,
  "src_vocab": "el.eole.vocab",
  "tensorboard_log_dir_dated": "tensorboard/Sep-04_20-55-10",
  "tgt_vocab_size": 20000,
  "seed": 1234,
  "tensorboard": true,
  "valid_metrics": [
    "BLEU"
  ],
  "share_vocab": false,
  "tensorboard_log_dir": "tensorboard",
  "report_every": 100,
  "n_sample": 0,
  "save_data": "data",
  "tgt_vocab": "en.eole.vocab",
  "transforms": [
    "sentencepiece",
    "filtertoolong"
  ],
  "src_vocab_size": 20000,
  "vocab_size_multiple": 8,
  "training": {
    "num_workers": 0,
    "average_decay": 0.0001,
    "dropout_steps": [
      0
    ],
    "normalization": "tokens",
    "compute_dtype": "torch.float16",
    "train_steps": 100000,
    "batch_type": "tokens",
    "decay_method": "noam",
    "bucket_size": 128000,
    "label_smoothing": 0.1,
    "model_path": "quickmt-el-en-eole-model",
    "batch_size_multiple": 8,
    "accum_count": [
      10
    ],
    "adam_beta2": 0.998,
    "save_checkpoint_steps": 5000,
    "valid_batch_size": 4096,
    "batch_size": 8000,
    "keep_checkpoint": 4,
    "accum_steps": [
      0
    ],
    "max_grad_norm": 0.0,
    "warmup_steps": 4000,
    "world_size": 1,
    "learning_rate": 2.0,
    "valid_steps": 5000,
    "prefetch_factor": 32,
    "optim": "adamw",
    "attention_dropout": [
      0.1
    ],
    "dropout": [
      0.1
    ],
    "gpu_ranks": [
      0
    ],
    "param_init_method": "xavier_uniform"
  },
  "data": {
    "corpus_1": {
      "path_src": "train.el",
      "transforms": [
        "sentencepiece",
        "filtertoolong"
      ],
      "path_align": null,
      "path_tgt": "train.en"
    },
    "valid": {
      "path_src": "valid.el",
      "transforms": [
        "sentencepiece",
        "filtertoolong"
      ],
      "path_align": null,
      "path_tgt": "valid.en"
    }
  },
  "model": {
    "heads": 8,
    "share_embeddings": false,
    "position_encoding_type": "SinusoidalInterleaved",
    "architecture": "transformer",
    "hidden_size": 1024,
    "share_decoder_embeddings": false,
    "transformer_ff": 4096,
    "encoder": {
      "heads": 8,
      "src_word_vec_size": 1024,
      "position_encoding_type": "SinusoidalInterleaved",
      "encoder_type": "transformer",
      "hidden_size": 1024,
      "n_positions": null,
      "transformer_ff": 4096,
      "layers": 8
    },
    "embeddings": {
      "position_encoding_type": "SinusoidalInterleaved",
      "src_word_vec_size": 1024,
      "word_vec_size": 1024,
      "tgt_word_vec_size": 1024
    },
    "decoder": {
      "heads": 8,
      "position_encoding_type": "SinusoidalInterleaved",
      "tgt_word_vec_size": 1024,
      "hidden_size": 1024,
      "n_positions": null,
      "decoder_type": "transformer",
      "transformer_ff": 4096,
      "layers": 2
    }
  },
  "transforms_configs": {
    "sentencepiece": {
      "src_subword_model": "${MODEL_PATH}/el.spm.model",
      "tgt_subword_model": "${MODEL_PATH}/en.spm.model"
    },
    "filtertoolong": {
      "tgt_seq_length": 256,
      "src_seq_length": 256
    }
  }
}
eole-model/el.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba83bbe4b5aec0bebb3cb9651f0a5ea609bec129ff834c1934c2eeb6be7bfcdc
size 704100
eole-model/en.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:327b717adc21439f5e70ee9bf1a7a6d6668f21045f05d94175e61f5554860563
size 587829
eole-model/model.00.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0eaab20d8c3832df34eb98cd7a975ed31643fb158fbc5df8f07310a27334778e
size 823882912
eole-model/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0007d3c944501d8a6f5ff57d19a68aeb69265eae3c267acd857374f979ded068
size 401699775
source_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
src.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba83bbe4b5aec0bebb3cb9651f0a5ea609bec129ff834c1934c2eeb6be7bfcdc
size 704100
target_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
tgt.spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:327b717adc21439f5e70ee9bf1a7a6d6668f21045f05d94175e61f5554860563
size 587829