aloychow committed
Commit ab7069b
1 Parent(s): ee6750a

Upload 8 files

README.md CHANGED
@@ -1,3 +1,121 @@
  ---
- license: apache-2.0
+ language:
+ - multilingual
+ - ar
+ - cs
+ - de
+ - en
+ - es
+ - et
+ - fi
+ - fr
+ - gu
+ - hi
+ - it
+ - ja
+ - kk
+ - ko
+ - lt
+ - lv
+ - my
+ - ne
+ - nl
+ - ro
+ - ru
+ - si
+ - tr
+ - vi
+ - zh
+ - af
+ - az
+ - bn
+ - fa
+ - he
+ - hr
+ - id
+ - ka
+ - km
+ - mk
+ - ml
+ - mn
+ - mr
+ - pl
+ - ps
+ - pt
+ - sv
+ - sw
+ - ta
+ - te
+ - th
+ - tl
+ - uk
+ - ur
+ - xh
+ - gl
+ - sl
+ license: mit
+ tags:
+ - mbart-50
  ---
+
+ # mBART-50
+
+ mBART-50 is a multilingual sequence-to-sequence model pre-trained with the "Multilingual Denoising Pretraining" objective. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
+
+ ## Model description
+
+ mBART-50 is a multilingual sequence-to-sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning:
+ instead of fine-tuning on a single direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 extends the original mBART model with 25 additional languages, supporting multilingual machine translation across 50 languages. The pre-training objective is explained below.
+
+ **Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data
+ `D = {D1, ..., DN}`, where each `Di` is a collection of monolingual documents in language `i`. The source documents are noised with two schemes:
+ first, the order of the original sentences is randomly shuffled; second, a novel in-filling scheme
+ replaces spans of text with a single mask token. The model is then tasked with reconstructing the original text.
+ 35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution `(λ = 3.5)`.
+ The decoder input is the original text offset by one position. A language id symbol `LID` is used as the initial token to predict the sentence.
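The snippet below is a toy illustration of that noising procedure, not the original fairseq implementation: it shuffles the sentence order, then replaces Poisson-length word spans with a single `<mask>` token until roughly 35% of the words are masked. The function name and whitespace "tokenization" are made up for illustration.

```python
import numpy as np

# Toy sketch of the denoising objective described above (NOT the original
# fairseq implementation): shuffle sentence order, then replace spans of words
# with a single <mask> token, with span lengths drawn from Poisson(3.5),
# until about 35% of the words are masked.
def add_noise(sentences, mask_ratio=0.35, poisson_lambda=3.5, seed=0):
    rng = np.random.default_rng(seed)
    # scheme 1: randomly shuffle the order of the original sentences
    shuffled = [sentences[i] for i in rng.permutation(len(sentences))]
    words = " ".join(shuffled).split()
    budget = int(round(mask_ratio * len(words)))  # total number of words to mask
    noised, i = [], 0
    while i < len(words):
        if budget > 0 and rng.random() < mask_ratio:
            # scheme 2: an entire span is replaced by one mask token
            span = min(max(1, int(rng.poisson(poisson_lambda))), budget)
            noised.append("<mask>")
            i += span
            budget -= span
        else:
            noised.append(words[i])
            i += 1
    return " ".join(noised)

print(add_noise(["The cat sat on the mat.", "It was warm.", "The dog barked twice."]))
```

During pre-training the model receives the noised text as encoder input and learns to reconstruct the original text, with the language id token prepended as described above.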
+
+
+ ## Intended uses & limitations
+
+ `mbart-large-50` is a pre-trained model primarily intended to be fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks.
+ See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.
+
+
+ ## Training
+
+ As the model is multilingual, it expects sequences in a specific format: a special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]`, where `X` is the source or target text and `lang_code` is the source language code for source text and the target language code for target text. `bos` is never used. Once examples are prepared in this format, the model can be trained like any other sequence-to-sequence model.
+
+
+ ```python
+ from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
+
+ model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
+ tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
+
+ src_text = " UN Chief Says There Is No Military Solution in Syria"
+ tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
+
+ # source text is encoded with the source language code (en_XX) as prefix
+ model_inputs = tokenizer(src_text, return_tensors="pt")
+ # target text is encoded with the target language code (ro_RO) as prefix
+ with tokenizer.as_target_tokenizer():
+     labels = tokenizer(tgt_text, return_tensors="pt").input_ids
+
+ model(**model_inputs, labels=labels)  # forward pass
+ ```
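For inference after fine-tuning, the same convention applies in reverse: the source text is prefixed with its language code, and the first generated token is forced to the target language code via `forced_bos_token_id`. A minimal sketch, using the fine-tuned `facebook/mbart-large-50-many-to-many-mmt` checkpoint from the model hub purely as an example (any mBART-50 translation fine-tune works the same way):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# A translation fine-tune of mBART-50 from the model hub, used here for illustration.
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

tokenizer.src_lang = "en_XX"  # source text is prefixed with en_XX
encoded = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")

# force the first decoded token to the target language code (ro_RO)
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.convert_tokens_to_ids("ro_RO"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```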
+
+
+
+ ## Languages covered
+ Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
+
+
+ ## BibTeX entry and citation info
+ ```
+ @article{tang2020multilingual,
+     title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
+     author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
+     year={2020},
+     eprint={2008.00401},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
+   "_num_labels": 3,
+   "activation_dropout": 0.0,
+   "activation_function": "gelu",
+   "add_bias_logits": false,
+   "add_final_layer_norm": true,
+   "architectures": [
+     "MBartForConditionalGeneration"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classif_dropout": 0.0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 12,
+   "decoder_start_token_id": 2,
+   "dropout": 0.1,
+   "early_stopping": true,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 12,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2"
+   },
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_2": 2
+   },
+   "max_length": 200,
+   "max_position_embeddings": 1024,
+   "model_type": "mbart",
+   "normalize_before": true,
+   "normalize_embedding": true,
+   "num_beams": 5,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "scale_embedding": true,
+   "static_position_embeddings": false,
+   "transformers_version": "4.4.0.dev0",
+   "use_cache": true,
+   "vocab_size": 250054,
+   "tokenizer_class": "MBart50Tokenizer"
+ }
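As a sanity check, the architecture values above can be read back through the `transformers` configuration API; a minimal sketch:

```python
from transformers import AutoConfig

# Loads config.json for the checkpoint and exposes its fields as attributes.
config = AutoConfig.from_pretrained("facebook/mbart-large-50")
print(config.model_type)                              # "mbart"
print(config.d_model)                                 # 1024
print(config.encoder_layers, config.decoder_layers)   # 12 12
print(config.vocab_size)                              # 250054
```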
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "max_length": 200,
+   "num_beams": 5,
+   "pad_token_id": 1,
+   "transformers_version": "4.27.0.dev0"
+ }
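In recent `transformers` versions these values become the model's default decoding parameters; `generate()` applies them unless they are overridden per call. A brief sketch:

```python
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

# Defaults loaded from generation_config.json
print(model.generation_config.num_beams)       # 5
print(model.generation_config.max_length)      # 200
print(model.generation_config.early_stopping)  # True

# Per-call arguments take precedence over these defaults, e.g.:
# model.generate(**inputs, num_beams=1, max_new_tokens=64)
```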
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee44fb4c607895f34b49c65091e62ba0b15b2cac5eac0c75f2c6c9dbce3cb10e
+ size 2444714899
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": "<mask>", "additional_special_tokens": ["ar_AR", "cs_CZ", "de_DE", "en_XX", "es_XX", "et_EE", "fi_FI", "fr_XX", "gu_IN", "hi_IN", "it_IT", "ja_XX", "kk_KZ", "ko_KR", "lt_LT", "lv_LV", "my_MM", "ne_NP", "nl_XX", "ro_RO", "ru_RU", "si_LK", "tr_TR", "vi_VN", "zh_CN", "af_ZA", "az_AZ", "bn_IN", "fa_IR", "he_IL", "hr_HR", "id_ID", "ka_GE", "km_KH", "mk_MK", "ml_IN", "mn_MN", "mr_IN", "pl_PL", "ps_AF", "pt_XX", "sv_SE", "sw_KE", "ta_IN", "te_IN", "th_TH", "tl_XX", "uk_UA", "ur_PK", "xh_ZA", "gl_ES", "sl_SI"]}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3476416edfc8ad7f9be6677ad3287274aede8ba9a0f054bf950a7ef2035cd12
+ size 2445080688
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"src_lang": null, "tgt_lang": null, "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": "<s>", "tokenizer_file": null, "model_max_length": 1024, "name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large", "special_tokens_map_file": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large/special_tokens_map.json"}