---
license: other
license_name: nv-ai-foundation-models-license
license_link: >-
  https://developer.download.nvidia.com/ai-foundation-models/nvidia-ai-foundation-models-license-10Nov2023.pdf
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- Megatron-LM
- Retro
- InstructRetro
- 48B
library_name: Megatron-LM
---

# InstructRetro

Retro [(Borgeaud et al., 2022)](https://arxiv.org/abs/2112.04426) is an autoregressive decoder-only language model (LM) pretrained with retrieval augmentation.
Retro offers practical scalability, supporting large-scale pretraining from scratch by retrieving from trillions of tokens.
Pretraining with retrieval provides a more efficient mechanism for storing factual knowledge than encoding it implicitly in the network's parameters, largely reducing the model parameter count while achieving lower perplexity than standard GPT.
Retro also provides the flexibility to update the knowledge stored in LMs [(Wang et al., 2023a)](https://arxiv.org/abs/2304.06762) by updating the retrieval database without retraining the LM.

InstructRetro [(Wang et al., 2023b)](https://arxiv.org/abs/2310.07713) further scales up the size of Retro to 48B, making it the largest LLM pretrained with retrieval (as of December 2023).
The resulting foundation model, Retro 48B, largely outperforms its GPT counterpart in terms of perplexity.
With instruction tuning on Retro, InstructRetro demonstrates significant improvements over instruction-tuned GPT on downstream tasks in the zero-shot setting. Specifically, InstructRetro improves over its GPT counterpart by 7% on average across 8 short-form QA tasks and by 10% across 4 challenging long-form QA tasks. We also find that one can ablate the encoder from the InstructRetro architecture and directly use the InstructRetro decoder backbone as GPT, while achieving comparable results.

## Model Overview

### License

The use of this model is governed by the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license).

### Supported Hardware

- H100
- A100 80GB, A100 40GB

### Model Version(s)

`retro-48b-base-4k`: Pretrained Retro 48B LM **without** instruction tuning.

*Using base models without instruction tuning for downstream task evaluation is not recommended.*

### Toolkit

[Megatron-LM Framework](https://github.com/NVIDIA/Megatron-LM/tree/InstructRetro)

## Environment

We recommend using a Docker environment to run the code.

### Docker image

We provide a Docker build file in [Dockerfile](https://github.com/NVIDIA/Megatron-LM/blob/InstructRetro/tools/retro/examples/Dockerfile) for reproducibility. The Docker image is based on `nvcr.io/nvidia/pytorch:23.09-py3`.
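
A minimal sketch of how one might build and launch the image, assuming the build is run from the repo root (the image tag and mount path are placeholders, not part of the repo):

```bash
# Build the image from the provided Dockerfile, using the repo root as build context.
docker build -t instructretro:23.09 -f tools/retro/examples/Dockerfile .

# Launch an interactive container with GPU access, mounting the cloned repo.
docker run --gpus all -it --rm -v "$PWD":/workspace/Megatron-LM instructretro:23.09
```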

### Install dependencies

Clone the Megatron repo:

```bash
git clone --branch InstructRetro https://github.com/NVIDIA/Megatron-LM.git
```

If Docker is not available, we recommend starting from a clean conda environment with the following runtime dependencies (a minimal setup sketch follows the list):

- Python 3.10
- NVIDIA CUDA® 12.2.1
- NVIDIA cuBLAS 12.2.5.6
- NVIDIA cuDNN 8.9.5
- NVIDIA NCCL 2.18.5
- PyTorch 2.1.0a0+32f93b1
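
As a minimal sketch of such an environment (the environment name is arbitrary, and the stable CUDA 12.1 PyTorch wheel below is assumed to be a workable stand-in for the `2.1.0a0+32f93b1` alpha build that ships in the NGC container):

```bash
# Create and activate a clean Python 3.10 environment.
conda create -n instructretro python=3.10 -y
conda activate instructretro

# Install a CUDA 12.x build of PyTorch 2.1 (the NGC container ships 2.1.0a0+32f93b1 instead).
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121
```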

Then install Retro-specific dependencies, including:
```bash
pip install -U faiss-gpu
pip install -U transformers
pip install -U sentencepiece
pip install -U h5py
pip install -U nltk
pip install -U einops
```

## Evaluation Command

Download our model checkpoint and tokenizer.
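
One possible way to fetch them from Hugging Face is via Git LFS; the repository id below is an assumption based on the `retro-48b-base-4k` model version named above and may need to be adjusted:

```bash
# Hypothetical download via Git LFS; replace the repo id if it differs.
git lfs install
git clone https://huggingface.co/nvidia/retro-48b-base-4k
```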

Specify the blank arguments in the [tools/retro/text_generation/retro_generate.sh](https://github.com/NVIDIA/Megatron-LM/blob/InstructRetro/tools/retro/text_generation/retro_generate.sh) script, including the model path, the Retro workdir, and the model-related parameters below.

| Parameter | Value | Explanation                   |
|-----------|-------|-------------------------------|
| mod_par   | 8     | Tensor model parallel size    |
| layers    | 48    | Number of transformer layers  |
| hid_dim   | 8192  | Hidden dimension size         |
| heads     | 64    | Number of attention heads     |
| pip_par   | 1     | Pipeline model parallel size  |
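
For orientation, these values correspond to standard Megatron-LM command-line arguments; the grouping below is only an illustrative sketch, and the variable names used inside `retro_generate.sh` may differ:

```bash
# Illustrative mapping of the table above onto standard Megatron-LM arguments.
MODEL_ARGS="--tensor-model-parallel-size 8 \
            --pipeline-model-parallel-size 1 \
            --num-layers 48 \
            --hidden-size 8192 \
            --num-attention-heads 64"
```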

We present an example command to run Retro generation with the InstructRetro checkpoints on the Natural Questions (NQ) task. The example command is for the 48B InstructRetro. Please specify the directory for the NQ dataset and update the command accordingly for other checkpoints.

```bash
bash tools/retro/text_generation/retro_generate.sh nq 48b greedy test 0 20000 1000 5 pp1 <path/to/checkpoint> 2
```

The generated responses will be saved in the corresponding checkpoint directory. For example, for the 48B InstructRetro, they will be saved to
`<path/to/retro>/retro-generate-nq_5_2_48b_test_greedy_0_20000_1000.txt`.

To evaluate the F1 / Exact Match (EM) scores of the generated responses, we provide an example script to run the evaluation on the NQ dataset. Please specify the directory for the NQ dataset and update the command accordingly for other checkpoints and downstream tasks.

```bash
python3 tools/retro/text_generation/evaluate.py
```

# Citations

See more details in our papers:

[Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study.](https://arxiv.org/abs/2304.06762)

_Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro._ (EMNLP 2023)

[InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining.](https://arxiv.org/abs/2310.07713)

_Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro._

Please cite the papers as follows if you use the data or code from this repo:

```bibtex
@inproceedings{wang2023shall,
  title     = {Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study},
  author    = {Boxin Wang and Wei Ping and Peng Xu and Lawrence McAfee and Zihan Liu and Mohammad Shoeybi and Yi Dong and Oleksii Kuchaiev and Bo Li and Chaowei Xiao and Anima Anandkumar and Bryan Catanzaro},
  booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing},
  year      = {2023}
}

@article{wang2023instructretro,
  title   = {InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining},
  author  = {Boxin Wang and Wei Ping and Lawrence McAfee and Peng Xu and Bo Li and Mohammad Shoeybi and Bryan Catanzaro},
  year    = {2023},
  journal = {arXiv preprint arXiv:2310.07713}
}
```