huangzixian committed 8924ae7 (1 parent: 35ace76): update readme

Files changed (1): README.md (+16 −17)

### Model Sources

- **Paper**: "LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages"

- **Link**: https://arxiv.org/pdf/2407

- **Repository**: https://github.com/CONE-MT/LLaMAX/

### Model Description

We collected extensive training sets in 102 languages for continued pre-training. LLaMAX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.
```python
def Prompt_template(query, src_language, trg_language):
    instruction = f'Translate the following sentences from {src_language} to {trg_language}.'
    # Alpaca-style instruction prompt.
    prompt = (
        'Below is an instruction that describes a task, paired with an input that provides further context. '
        'Write a response that appropriately completes the request.\n'
        f'### Instruction:\n{instruction}\n### Input:\n{query}\n### Response:'
    )
    return prompt
```
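
A minimal sketch of how this template is typically driven with the transformers library; the checkpoint name `LLaMAX/LLaMAX2-7B-Alpaca`, the German example sentence, and `max_new_tokens` are illustrative assumptions, not fixed by this README:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name, for illustration only; substitute the model you use.
model_id = 'LLaMAX/LLaMAX2-7B-Alpaca'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

query = 'Das Wetter ist heute schön.'  # hypothetical example input
prompt = Prompt_template(query, 'German', 'English')
inputs = tokenizer(prompt, return_tensors='pt')

generate_ids = model.generate(inputs.input_ids, max_new_tokens=128)
# Decode the generated ids, stripping special tokens.
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```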
 
### 🔥 Excellent Translation Performance

LLaMAX2-Alpaca achieves an average spBLEU score improvement of over **10 points** compared with the LLaMA2-Alpaca model on the Flores-101 dataset.

| System | Size | en-X (COMET) | en-X (BLEU) | zh-X (COMET) | zh-X (BLEU) | de-X (COMET) | de-X (BLEU) | ne-X (COMET) | ne-X (BLEU) | ar-X (COMET) | ar-X (BLEU) | az-X (COMET) | az-X (BLEU) | ceb-X (COMET) | ceb-X (BLEU) |
|----------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| LLaMA2-Alpaca  | 7B  | 52.83 | 9.44  | 51.29 | 3.80  | 51.47 | 6.82  | 46.59 | 1.31  | 46.76 | 2.84  | 48.63 | 1.36  | 41.02 | 2.69  |
| LLaMA2-Alpaca  | 13B | 57.16 | 11.85 | 53.93 | 6.25  | 54.70 | 9.42  | 51.47 | 3.11  | 50.73 | 5.23  | 50.68 | 2.74  | 47.86 | 4.96  |
| LLaMAX2-Alpaca | 7B  | 76.66 | 23.17 | 73.54 | 14.17 | 73.82 | 18.96 | 74.64 | 14.49 | 72.00 | 15.82 | 70.91 | 11.34 | 68.67 | 15.53 |
 
 
| System | Size | X-en (COMET) | X-en (BLEU) | X-zh (COMET) | X-zh (BLEU) | X-de (COMET) | X-de (BLEU) | X-ne (COMET) | X-ne (BLEU) | X-ar (COMET) | X-ar (BLEU) | X-az (COMET) | X-az (BLEU) | X-ceb (COMET) | X-ceb (BLEU) |
|----------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| LLaMA2-Alpaca  | 7B  | 65.85 | 16.44 | 56.53 | 4.46  | 56.76 | 9.01  | 34.96 | 1.03  | 44.10 | 2.18  | 40.67 | 0.63  | 45.69 | 1.73  |
| LLaMA2-Alpaca  | 13B | 68.72 | 19.69 | 64.46 | 8.80  | 62.86 | 12.57 | 38.88 | 2.16  | 52.08 | 4.48  | 41.18 | 0.87  | 48.47 | 2.51  |
| LLaMAX2-Alpaca | 7B  | 80.55 | 30.63 | 75.52 | 13.53 | 74.47 | 19.26 | 67.36 | 15.47 | 75.40 | 15.32 | 72.03 | 10.27 | 65.05 | 16.11 |
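
The spBLEU scores above are BLEU computed with the Flores SentencePiece tokenizer. A minimal scoring sketch with sacrebleu; the file names are hypothetical, and the tokenizer flag is named 'flores101' in recent sacrebleu releases (older ones call it 'spm'):

```python
# Minimal spBLEU scoring sketch (pip install sacrebleu).
# 'hyps.en' and 'refs.en' are hypothetical file names for illustration.
import sacrebleu

with open('hyps.en') as f:
    hyps = [line.rstrip('\n') for line in f]
with open('refs.en') as f:
    refs = [[line.rstrip('\n') for line in f]]  # one reference stream

# 'flores101' selects the SentencePiece tokenizer used for spBLEU.
bleu = sacrebleu.corpus_bleu(hyps, refs, tokenize='flores101')
print(f'spBLEU: {bleu.score:.2f}')
```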
 
### 🔥 Effective Base Model for Multilingual Tasks
 
LLaMAX preserves its efficacy on general tasks and improves performance on multilingual tasks. We fine-tuned LLaMAX using only the English training set of each downstream task, yet it still shows significant improvements in non-English languages. We provide fine-tuned LLaMAX models for the following three tasks:

- **Math Reasoning**: https://huggingface.co/LLaMAX/LLaMAX2-7B-MetaMath

- **Commonsense Reasoning**: https://huggingface.co/LLaMAX/LLaMAX2-7B-X-CSQA

- **Natural Language Inference**: https://huggingface.co/LLaMAX/LLaMAX2-7B-XNLI
 
### Supported Languages

Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
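
`Prompt_template` above takes language names as plain strings, while this list pairs each name with a code. A small illustrative helper tying the two together (subset only; names and codes taken from the list above):

```python
# Illustrative subset of the supported-language list above; extend as needed.
LANG_NAME = {
    'af': 'Afrikaans',
    'de': 'German',
    'en': 'English',
    'sw': 'Swahili',
    'zu': 'Zulu',
}

def prompt_from_codes(query, src_code, trg_code):
    # Reuses Prompt_template from the snippet earlier in this README.
    return Prompt_template(query, LANG_NAME[src_code], LANG_NAME[trg_code])

print(prompt_from_codes('Good morning!', 'en', 'zu'))
```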