jisx committed on
Commit 1f464b8
1 Parent(s): 1152828

Upload README.md with huggingface_hub

Files changed (1): README.md +10 -5
README.md CHANGED
@@ -18,15 +18,14 @@ license: cc-by-nc-4.0
 This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on Bactrian-X, a parallel instruction dataset covering 52 languages.
 We progressively add one language at a time during instruction fine-tuning, training 52 models in total. We then evaluate those models on three multilingual benchmarks.
 
- Please refer to our paper for more details.
+ Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
 
- #### Instruction tuning details
 * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
 * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian, Marathi, Burmese, Nepali, Dutch, Polish, Pashto, Portuguese, Romanian, Russian, Sinhala, Slovenian, Swedish, Swahili, Tamil, Telugu, Thai, Tagalog, Turkish, Ukrainian, Urdu, Vietnamese
 * Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn, mr, my, ne, nl, pl, ps, pt, ro, ru, si, sl, sv, sw, ta, te, th, tl, tr, uk, ur, vi
 * Training method: full-parameter fine-tuning.
 
- #### Usage
+ ### Usage
 The model checkpoint should be loaded using the `transformers` library.
 
 ```python
@@ -36,9 +35,15 @@ tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-51")
 model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-51")
 ```
 
- #### Citation
+ ### Citation
 ```
- @article{
+ @misc{lucky52,
+   title = "Lucky 52: How Many Languages Are Needed to Instruction Fine-Tune Large Language Models?",
+   author = "Shaoxiong Ji and Pinzhen Chen",
+   year = "2024",
+   eprint = "2404.04850",
+   archiveprefix = "arXiv",
+   primaryclass = "cs.CL"
 }
 ```
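As an aside on the "add one language at a time" setup described in the card above, the sketch below simply builds cumulative language subsets from the code list in the diff. It assumes the languages are added in the listed order and that the checkpoint suffix (here `no-51`) counts the instruction languages; neither assumption is stated in this commit.

```python
# Illustrative only: cumulative language subsets for the progressive
# instruction-tuning series described in the README. The addition order is
# assumed to follow the card's language-code list; it is not given here.
codes = (
    "en zh af ar az bn cs de es et fa fi fr gl gu he hi hr id it ja ka kk km ko "
    "lt lv mk ml mn mr my ne nl pl ps pt ro ru si sl sv sw ta te th tl tr uk ur vi"
).split()

# The i-th subset contains the first i languages; this card lists 51 codes,
# matching the "no-51" suffix of the checkpoint name (an assumed correspondence).
subsets = {i: codes[:i] for i in range(1, len(codes) + 1)}
print(len(codes))   # 51
print(subsets[3])   # ['en', 'zh', 'af']
```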
 
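For completeness, a minimal inference sketch that extends the loading snippet shown in the diff. Only the two `from_pretrained` calls come from the README; the prompt and generation settings are assumptions for illustration.

```python
# Minimal inference sketch: only the two from_pretrained calls are from the
# README; the prompt and generation settings below are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-51")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-51")

# The card does not specify the prompt template used during instruction tuning,
# so a plain prompt is used here purely for illustration.
prompt = "Translate the following sentence into French: Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```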