Muennighoff committed on
Commit
44f3fe5
1 Parent(s): fd4a9a8
Files changed (1)
  1. README.md +96 -20
README.md CHANGED
@@ -87,14 +87,12 @@ widget:

  # Table of Contents

- 1. [Model Summary](#model=summary)
  2. [Use](#use)
- 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- 4. [Training Details](#training-details)
  5. [Evaluation](#evaluation)
- 6. [Environmental Impact](#environmental-impact)
  7. [Citation](#citation)
- 9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

  # Model Summary

@@ -103,6 +101,7 @@ widget:
  - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
  - **Paper:** [TODO]
  - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
  - **BLOOMZ & mT0 Model Family:**
  |Name|Explanation|
  |----|-----------|
@@ -129,40 +128,117 @@ widget:
  |[mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt)|13B parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [xP3](https://huggingface.co/bigscience/xP3) & [xP3mt](https://huggingface.co/bigscience/xP3mt). **Better than [mt0-xxl](https://huggingface.co/bigscience/mt0-xxl) when prompting in non-English**|
  |||
  |[mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3)| 13B parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [P3](https://huggingface.co/bigscience/P3). **Released for research purposes, performance is inferior to [mt0-xxl](https://huggingface.co/bigscience/mt0-xxl)**|
- |----|-----------|

- # Intended uses

- You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Translate this to Chinese: Je t'aime."*, and the model will hopefully generate *"我爱你"*.

- # How to use

- Here is how to use the model in PyTorch:

- TODO: Better code with auto-precision?
  ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM

- tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
- model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

- inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
  outputs = model.generate(inputs)
  print(tokenizer.decode(outputs[0]))
  ```

- To use another checkpoint, replace the path in `AutoTokenizer` and `AutoModelForCausalLM`.

- **Note: 176B models are trained with bfloat16, while smaller models are trained with fp16. We recommend using the same precision type or fp32 at inference**

  # Limitations

- - Large model size may require large computational resources
- - High performance variance depending on the prompt

- # BibTeX entry and citation info

  ```bibtex
  TODO
  ```

  # Table of Contents

+ 1. [Model Summary](#model-summary)
  2. [Use](#use)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
  5. [Evaluation](#evaluation)
  7. [Citation](#citation)

  # Model Summary

  - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
  - **Paper:** [TODO]
  - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
+ - **Languages:** Refer to [BLOOM](https://huggingface.co/bigscience/bloom) for the pretraining language proportions & [xP3](https://huggingface.co/bigscience/xP3) for the finetuning language proportions. The model understands both the pretraining & finetuning languages.
  - **BLOOMZ & mT0 Model Family:**
  |Name|Explanation|
  |----|-----------|
  |[mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt)|13B parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [xP3](https://huggingface.co/bigscience/xP3) & [xP3mt](https://huggingface.co/bigscience/xP3mt). **Better than [mt0-xxl](https://huggingface.co/bigscience/mt0-xxl) when prompting in non-English**|
  |||
  |[mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3)| 13B parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [P3](https://huggingface.co/bigscience/P3). **Released for research purposes, performance is inferior to [mt0-xxl](https://huggingface.co/bigscience/mt0-xxl)**|

+ # Use
+
+ ## Intended use
+
+ We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*" Some other prompt ideas from our paper:
+ - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
+ - Suggest at least five related search terms to "Mạng neural nhân tạo".
+ - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
+ - Explain in a sentence in Telugu what is backpropagation in neural networks.
+
+ **Feel free to share your generations in the Community tab!**
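+
+ For a quick way to try these prompts, here is a minimal sketch using the `transformers` text-generation pipeline with the small [bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) checkpoint (any checkpoint from the family table above can be substituted):
+
+ ```python
+ # pip install -q transformers
+ from transformers import pipeline
+
+ # bloomz-560m keeps the example lightweight; larger checkpoints use the same API
+ generator = pipeline("text-generation", model="bigscience/bloomz-560m")
+
+ prompt = 'Suggest at least five related search terms to "Mạng neural nhân tạo".'
+ print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
+ ```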
+
+ ## How to use
+
+ ### CPU
+
+ <details>
+ <summary> Click to expand </summary>
+
  ```python
+ # pip install -q transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigscience/bloomz"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint)
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
  outputs = model.generate(inputs)
  print(tokenizer.decode(outputs[0]))
  ```

+ </details>
+
+ ### GPU
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install -q transformers accelerate
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigscience/bloomz"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ ### GPU in 8bit
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install -q transformers accelerate bitsandbytes
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigscience/bloomz"
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
+
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
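+
+ In all of the snippets above, `generate` runs with its default settings and therefore returns only a short continuation. A minimal sketch of passing explicit generation arguments for more control (the values here are illustrative, not tuned recommendations):
+
+ ```python
+ # Continuing from any of the snippets above: cap the number of newly generated tokens
+ # and strip special tokens from the decoded output.
+ outputs = model.generate(inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```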

  # Limitations

+ Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model so, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
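+
+ As an illustrative sketch of the effect of prompt clarity (using the small [bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) checkpoint so it runs quickly; outputs will differ across checkpoints):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigscience/bloomz-560m"  # small checkpoint, for a quick comparison
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint)
+
+ # An ambiguous prompt (no full stop) vs. prompts that clearly mark where the input ends.
+ prompts = [
+     "Translate to English: Je t'aime",
+     "Translate to English: Je t'aime.",
+     "Translate to English: Je t'aime. Translation:",
+ ]
+ for prompt in prompts:
+     inputs = tokenizer.encode(prompt, return_tensors="pt")
+     outputs = model.generate(inputs, max_new_tokens=20)
+     print(repr(tokenizer.decode(outputs[0], skip_special_tokens=True)))
+ ```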
+
+ # Training
+
+ ## Model
+
+ - Architecture: Same as [bloom](https://huggingface.co/bigscience/bloom); also refer to the `config.json` file (see the snippet after this list)
+ - Finetuning steps: 498
+ - Finetuning tokens: 2.09 billion
+ - Finetuning layout: 72x pipeline parallel, 1x tensor parallel, 4x data parallel
+ - Precision: bfloat16
+
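+ A minimal sketch for inspecting that configuration directly (any checkpoint name from the family table can be substituted):
+
+ ```python
+ from transformers import AutoConfig
+
+ # Prints the architecture hyperparameters stored in the checkpoint's config.json
+ config = AutoConfig.from_pretrained("bigscience/bloomz")
+ print(config)
+ ```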
+
+ ## Hardware
+
+ - 288 A100 80GB GPUs (36 nodes)
+ - 8 GPUs per node using NVLink 4 inter-gpu connects, 4 OmniPath links
+ - NCCL-communications network: a fully dedicated subnet
+ - AMD CPUs with 512GB memory per node
+
+ ## Software
+
+ - [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
+ - [DeepSpeed](https://github.com/microsoft/DeepSpeed)
+ - [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
+ - [apex](https://github.com/NVIDIA/apex)
+
+ # Evaluation
+
+ We refer to Table 7 from our paper [TODO LINK].
+
+ # Citation
  ```bibtex
  TODO
  ```