GGUF

Commit fb40a5f (parent: 5c7f8d0) by RichardErkhov: uploaded readme

Files changed (1): README.md (+569 −0)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


bloom-1b7 - GGUF
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b7/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloom-1b7.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q2_K.gguf) | Q2_K | 0.98GB |
| [bloom-1b7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_XS.gguf) | IQ3_XS | 1.08GB |
| [bloom-1b7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_S.gguf) | IQ3_S | 1.1GB |
| [bloom-1b7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_S.gguf) | Q3_K_S | 1.1GB |
| [bloom-1b7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_M.gguf) | IQ3_M | 1.15GB |
| [bloom-1b7.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K.gguf) | Q3_K | 1.2GB |
| [bloom-1b7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_M.gguf) | Q3_K_M | 1.2GB |
| [bloom-1b7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_L.gguf) | Q3_K_L | 1.25GB |
| [bloom-1b7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_XS.gguf) | IQ4_XS | 1.27GB |
| [bloom-1b7.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_0.gguf) | Q4_0 | 1.31GB |
| [bloom-1b7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_NL.gguf) | IQ4_NL | 1.31GB |
| [bloom-1b7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_S.gguf) | Q4_K_S | 1.31GB |
| [bloom-1b7.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K.gguf) | Q4_K | 1.39GB |
| [bloom-1b7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_M.gguf) | Q4_K_M | 1.39GB |
| [bloom-1b7.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_1.gguf) | Q4_1 | 1.41GB |
| [bloom-1b7.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_0.gguf) | Q5_0 | 1.51GB |
| [bloom-1b7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_S.gguf) | Q5_K_S | 1.51GB |
| [bloom-1b7.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K.gguf) | Q5_K | 1.57GB |
| [bloom-1b7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_M.gguf) | Q5_K_M | 1.57GB |
| [bloom-1b7.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_1.gguf) | Q5_1 | 1.61GB |
| [bloom-1b7.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q6_K.gguf) | Q6_K | 1.72GB |
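
The files above work with any GGUF-compatible runtime. As a minimal sketch of downloading and running one of them (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; Q4_K_M is just one reasonable quality/size trade-off):

```python
# Minimal sketch (not an official quickstart): fetch one quant from the
# table above and generate text with it via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/bigscience_-_bloom-1b7-gguf",
    filename="bloom-1b7.Q4_K_M.gguf",  # 1.39GB; any file from the table works
)

llm = Llama(model_path=model_path, n_ctx=2048)  # matches BLOOM's 2048-token context
out = llm("The BLOOM language model is", max_tokens=64)
print(out["choices"][0]["text"])
```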


Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---

<h1 style='text-align: center'>BLOOM LM</h1>
<h2 style='text-align: center'><em>BigScience Large Open-science Open-access Multilingual Language Model</em></h2>
<h3 style='text-align: center'>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:auto; margin-right:auto; display:block"/>

Version 1.0 / 26.May.2022

# Model Card for Bloom-1b7

<!-- Provide a quick summary of what the model is/does. -->

## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)

## Model Details

### Model Description
*This section provides information for anyone who wants to know about the model.*

- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))

  * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*

- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**

  * The French government.

  * Hugging Face ([website](https://huggingface.co)).

  * Organizations of contributors. *(Further breakdown of organizations forthcoming.)*

## Uses

*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*

### Intended Use

This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.

#### **Direct Use**

- Text generation

- Exploring characteristics of language generated by a language model

  - Examples: Cloze tests, counterfactuals, generations with reframings
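
As an illustration of direct use, here is a minimal text-generation sketch with the original full-precision checkpoint (assuming the `transformers` and `torch` packages are installed; the prompt is arbitrary):

```python
# Minimal sketch of direct use: free-form text generation with the
# original bigscience/bloom-1b7 checkpoint via the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-1b7")
print(generator("The BigScience workshop was", max_new_tokens=40)[0]["generated_text"])
```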

#### **Downstream Use**

- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization

### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*

See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The list below is non-exhaustive, but covers some easily foreseeable problematic use cases.

#### **Out-of-scope Uses**

Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions), nor for uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.

##### Out-of-scope Uses Include:

- Usage in biomedical domains, political and legal domains, or finance domains

- Usage for evaluating or scoring individuals, such as for employment, education, or credit

- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct

#### **Misuse**

Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities is a misuse of this model. This includes:

- Spam generation

- Disinformation and influence operations

- Disparagement and defamation

- Harassment and abuse

- [Deception](#deception)

- Unconsented impersonation and imitation

- Unconsented surveillance

- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)

### Intended Users

#### **Direct Users**

- General Public

- Researchers

- Students

- Educators

- Engineers/developers

- Non-commercial entities

- Community advocates, including human and civil rights groups

#### Indirect Users

- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)

- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Stakeholders)

- People and groups referred to by the LLM

- People and groups exposed to outputs of, or decisions based on, the LLM

- People and groups whose original work is included in the LLM



## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*

The model may:

- Overrepresent some viewpoints and underrepresent others

- Contain stereotypes

- Contain [personal information](#personal-data-and-information)

- Generate:

  - Hateful, abusive, or violent language

  - Discriminatory or prejudicial language

  - Content that may not be appropriate for all settings, including sexual content

- Make errors, including producing incorrect information as if it were factual

- Generate irrelevant or repetitive outputs


### Recommendations

*This section provides information on warnings and potential mitigations.*

- Indirect users should be made aware when the content they're working with is created by the LLM.

- Users should be aware of [Bias, Risks, and Limitations](#bias-risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.

- Models pretrained with the LLM should include an updated Model Card.

- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.



## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*

Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).

Training data includes:

- 45 natural languages

- 12 programming languages
- 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more)

#### **Languages**

The pie chart shows the distribution of languages in training data.

![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true)


**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**

| Niger-Congo | Percentage | | Indic | Percentage |
|----------------|------------|------|-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |

**The following table shows the distribution of programming languages.**


| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | Go | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |


## Evaluation
*This section describes the evaluation protocols and provides the results.*


### Metrics
*This section describes the different ways performance is calculated and why.*

Includes:

| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models |

And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_

### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*

- Language, such as English or Yoruba

- Domain, such as newswire or stories

- Demographic characteristics, such as gender or nationality

### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*

**Train-time Evaluation:**

As of 25.May.2022, 15:00 PST:

- Training Loss: 2.0

- Validation Loss: 2.2

- Perplexity: 8.9

(More evaluation scores forthcoming at the end of model training.)

- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community



## Environmental Impact

The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.

**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*

**Estimated electricity usage:** *(Forthcoming upon completion of training.)*



## Technical Specifications
*This section provides information for people who work on model development.*

Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.

**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):

* Decoder-only architecture

* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))

* ALiBi positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions

* 1,722,408,960 parameters:

  * 513,802,240 embedding parameters

  * 24 layers, 16 attention heads

  * Hidden layers are 2048-dimensional

  * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
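
As a quick consistency check on these numbers (a sketch, not from the original card: the 250,880-row embedding is inferred from the counts above, since 513,802,240 / 2048 = 250,880, slightly larger than the tokenizer's 250,680-entry vocabulary, presumably due to padding):

```python
# Back-of-the-envelope check of the parameter counts listed above.
hidden_size = 2048
embedding_params = 513_802_240
total_params = 1_722_408_960

print(embedding_params // hidden_size)  # 250880 embedding rows (inferred padded vocabulary)
print(total_params - embedding_params)  # 1208606720 non-embedding parameters
```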

**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
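
Illustratively, that objective corresponds to the following call (a toy sketch with random tensors, not actual training code):

```python
# The training objective named above: per-token cross entropy,
# averaged over tokens ("mean" reduction). Shapes are dummy values.
import torch

vocab_size = 250_680
logits = torch.randn(8, vocab_size)           # next-token logits for 8 positions
targets = torch.randint(0, vocab_size, (8,))  # the tokens that actually follow
loss = torch.nn.CrossEntropyLoss(reduction="mean")(logits, targets)
print(loss.item())                            # ≈ ln(vocab_size) ≈ 12.4 for random logits
```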

**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).

* Hardware: 64 V100 16/32GB GPUs (16 nodes):

  * 4 GPUs per node

  * 40 CPUs per task

  * 1 task per node

  * CPU: AMD

  * CPU memory: 160GB per node

  * GPU memory: 64GB or 128GB (depending on node availability during training) per node

  * Inter-node connect: Omni-Path Architecture (OPA)

  * NCCL-communications network: a fully dedicated subnet

  * Disc IO network: shared network with other types of nodes

* Software:

  * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))

  * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))

  * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))

  * apex ([Github link](https://github.com/NVIDIA/apex))

### **Training**

- Checkpoint size:

  - Fp16 weights: ~3.4GB (# params * 2 bytes)

  - Full checkpoint with optimizer states: --

- Training throughput: --

- Number of epochs: 1

- Dates:

  - Start: 11th March, 2022 11:42am PST

  - End: 20 May, 2022

- Server training location: Île-de-France, France

### **Tokenization**

The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:

- A byte-level Byte Pair Encoding (BPE) algorithm

- A simple pre-tokenization rule, no normalization

- A vocabulary size of 250,680

It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
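
A minimal sketch of loading and inspecting this tokenizer through the original checkpoint (assuming the `transformers` package is installed):

```python
# Inspect the BLOOM byte-level BPE tokenizer described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
print(tok.vocab_size)  # 250680, matching the stated vocabulary size
print(tok.tokenize("Un modèle de langue multilingue"))  # subword pieces, no normalization
```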



## Citation

**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022

## Glossary and Calculations

*This section defines common terms and how metrics are calculated.*

- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.

- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data to be. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically, perplexity is the exponentiated cross-entropy loss (a short worked example follows this glossary).

- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).

- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).

- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).

- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information are defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf); and in the People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).

- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))

- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
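
As a worked example of the metric calculations, perplexity is the exponentiated cross-entropy loss, which lines up with the train-time numbers in [Results](#results):

```python
# Perplexity from cross-entropy loss (in nats per token).
import math

validation_loss = 2.2             # from the Results section above
print(math.exp(validation_loss))  # ≈ 9.03, consistent with the reported perplexity of 8.9
```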


## More Information


### Dataset Creation

Blog post detailing the design choices made during dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling

### Technical Specifications

Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours

More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model

Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss

Insights on how to approach training, including negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md

Details on the engineering obstacles overcome during preparation (instabilities, optimization of training throughput, and many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md

### Initial Results

Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book

## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*

Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff

## Model Card Contact

**Send Questions to:** bigscience-contact@googlegroups.com