---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- news_commentary
language:
- ar
- el
- hi
- tr
- vi
- zh
- en
metrics:
- bleu
- bleurt
- comet
pipeline_tag: text-generation
---
# Extrapolating Large Language Models to Non-English by Aligning Languages

This repository contains the code for a project that empowers pre-trained Large Language Models (LLMs) in non-English languages by building semantic alignment across languages. The project explores cross-lingual instruction-tuning and multilingual instruction-tuning techniques. The implementation is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

![](./xllama.jpg)

## Requirements and Installation
To install this repository, follow these steps:
``` bash
git clone git@github.com:NJUNLP/x-LLM.git
cd x-LLM
pip install --editable ./
```

For detailed information about the conda environment, refer to the `environment.yml` file.

## Usage
### Download Pre-trained LLM
Start by downloading the pre-trained LLM into the ./model directory.
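
As a concrete illustration, the snippet below fetches LLaMA-7B weights from the Hugging Face Hub into ./model with the `huggingface_hub` library. This is a minimal sketch: the repository id and the local folder name are assumptions, so substitute the checkpoint you actually intend to use.
``` python
# Sketch: download a pre-trained LLM into ./model.
# The repo id below is an assumption; point it at the checkpoint you use.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="huggyllama/llama-7b",    # assumed Hub repository for LLaMA-7B
    local_dir="./model/llama-7b-hf",  # folder name used by the commands below
)
```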

### Download Dataset
You can download all the datasets used in this project from this [link](https://drive.google.com/file/d/1bkejieKDJFDJ45UmQYiY4eeqpGBwj-r-/view?usp=drive_link). Once downloaded, place the datasets in the ./data directory. The datasets include:

* Training dataset
  * Alpaca
  * Wikimatrix
  * Newscommentary
* Evaluation dataset
  * XQUAD
  * MLQA
  * Flores-101
  * MI-Eval

### Load Raw Data Along with Instruction
You can load raw data along with its instruction using the provided scripts (./data/<dataset>/<dataset>.py). If you want to use a new dataset, you need to implement a corresponding script. The loaded data has the following structure:
``` python
datasets.Features(
    {
        "id": datasets.Value("string"),
        "instruction": datasets.Value("string"),
        "input": datasets.Value("string"),
        "output": datasets.Value("string")
    }
)
```
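
For example, assuming a `datasets` version that still supports loading scripts, a single call is enough to materialize one of these datasets and inspect a record. The script path and split name below are assumptions; adjust them to the dataset you actually use.
``` python
# Sketch: load one dataset through its loading script and look at the first record.
# The script path and split name are assumptions, not guaranteed by the repository.
from datasets import load_dataset

dataset = load_dataset("./data/alpaca/alpaca.py", split="train")
print(dataset[0]["instruction"])
print(dataset[0]["input"])
print(dataset[0]["output"])
```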

## Instruction-tune Pre-trained LLM
To instruction-tune the pre-trained LLM, run the train.sh script. For example, you can instruction-tune LLaMA-7B to x-LLaMA-7B (Chinese) with the following command:
``` bash
bash script/train.sh llama-7b-hf alpaca_en+alpaca_zh+translation_ncwm_en-zh
```
The first argument denotes the pre-trained LLM to use, and the second argument specifies the training data. You can use + to concatenate multiple datasets; the combined training data is shuffled by the Huggingface Trainer.
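
Conceptually, the + syntax amounts to concatenating the listed datasets and shuffling the result before training. The sketch below illustrates that idea with the `datasets` library; it is not the repository's actual training code, and the script paths are assumptions.
``` python
# Illustration only: what dataset concatenation and shuffling amount to.
# The script paths are assumptions; the real pipeline lives in script/train.sh.
from datasets import load_dataset, concatenate_datasets

parts = [
    load_dataset("./data/alpaca/alpaca.py", split="train"),
    load_dataset("./data/news_commentary/news_commentary.py", split="train"),
]
train_data = concatenate_datasets(parts).shuffle(seed=42)
print(len(train_data))
```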

Once training is complete, the finetuned LLM is saved in ./model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune. You can use aliases to define shorter names; more details can be found in ./data/alias/alias.json.

## Test Finetuned LLM
To test the finetuned LLM, run the inference.sh script. For example, you can test the tuned LLM on the Flores dataset with the following command:
``` bash
bash script/inference.sh llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune translation_flores_en-zh
```
The output is saved in model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl, where the prediction field contains the content generated by the LLM.
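
A few lines of Python are enough to read the predictions back out of that file. The path is the one produced by the command above; only the prediction field is documented here, so treat any other fields as assumptions.
``` python
# Read the inference output and print each generated prediction.
import json

path = "model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl"
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["prediction"])
```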

## Interact with LLM Through Web UI
To interact with the LLM through a web UI, run app.py with the following command:
``` bash
python app.py model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune
```

## Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{zhu2023extrapolating,
      title={Extrapolating Large Language Models to Non-English by Aligning Languages},
      author={Wenhao Zhu and Yunzhe Lv and Qingxiu Dong and Fei Yuan and Jingjing Xu and Shujian Huang and Lingpeng Kong and Jiajun Chen and Lei Li},
      year={2023},
      eprint={2308.04948},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```