Commit e726b36 • Parent(s): 94a4acd
Muennighoff committed

341btokens (#12)

- New model at 341b tokens (2e7ba66afa2e7dbf238a0e19c5dc9bd05578ab9c)
- Update README.md (6d57f50e1e2301253bfc3ad3fb7523f598989fc9)
- Update README.md (b353d10b1c4c558e68aec4e32198f2e0db5782bf)

Files changed:
- README.md +5 -9
- pytorch_model.bin +2 -2
README.md CHANGED

@@ -52,8 +52,6 @@ language:
 pipeline_tag: text-generation
 ---

-# <span style="color:red"><b>WARNING:</b> This is an <b>intermediary checkpoint</b>. It is not fully trained yet. You might want to use [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) if you want a model that has completed training.</span>
-
 <h1 style='text-align: center '>BLOOM LM</h1>
 <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
 <h3 style='text-align: center '>Model Card</h3>
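The removed warning pointed readers to the fully trained [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) model, and the card's `pipeline_tag: text-generation` indicates how either checkpoint is meant to be used. Below is a minimal usage sketch; it is not part of the commit, and `bigscience/bloom-1b3` is used only as a stand-in because the intermediate checkpoint's repository id is not shown in this diff.

```python
# Minimal sketch (not from the commit): loading a BLOOM checkpoint with the
# text-generation pipeline declared by the model card's `pipeline_tag`.
# "bigscience/bloom-1b3" is the fully trained model linked in the removed
# warning; substitute this repository's own id to try the intermediate checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-1b3")
out = generator("BLOOM is a multilingual language model that", max_new_tokens=30)
print(out[0]["generated_text"])
```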
@@ -456,7 +454,7 @@ Includes:
 And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_

 ### Factors
-*This section lists some different aspects of
+*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*

 - Language, such as English or Yoruba

@@ -471,13 +469,11 @@ And multiple different metrics for specific tasks. _(More evaluation metrics for

 As of 25.May.2022, 15:00 PST:

-- Training Loss: 2.
-
-- Validation Loss: 2.2
+- Training Loss: 2.3

+- Validation Loss: 2.9

+- Perplexity: 16

 </details>
 <p> </p>
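A note on the updated evaluation numbers above (not part of the commit): perplexity is commonly reported as the exponential of the mean per-token cross-entropy loss in nats, so the loss and perplexity figures are two views of the same quantity. The sketch below only illustrates that conversion; the card's reported perplexity of 16 may be computed on a different split or with different averaging than the quoted validation loss, so exact agreement is not expected.

```python
# Illustrative only: perplexity as exp() of the mean per-token loss (in nats).
import math

for name, loss in [("training", 2.3), ("validation", 2.9)]:
    print(f"{name} loss {loss} -> perplexity {math.exp(loss):.1f}")

# exp(2.9) is about 18.2; the reported perplexity of 16 corresponds to a
# loss of about ln(16) = 2.77, hence the hedge about differing eval setups.
```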
@@ -564,5 +560,5 @@ Initial prompting experiments using interim checkpoints: https://huggingface.co/
 ## Model Card Authors
 *Ordered roughly chronologically and by amount of time spent.*

-Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay
+Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff

pytorch_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:179bb023e04beaee344fb6a5bb83765a39d736481f81d1364195fffb3ec0ca81
+size 14138164543
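The pytorch_model.bin entry is a Git LFS pointer file: the `oid sha256:` line records the SHA-256 digest of the actual weight file and `size` its length in bytes, so a downloaded copy can be checked against the pointer. The sketch below is not part of the commit, and the local file path is a placeholder.

```python
# Sketch: verify a downloaded pytorch_model.bin against the LFS pointer above.
import hashlib
import os

EXPECTED_OID = "179bb023e04beaee344fb6a5bb83765a39d736481f81d1364195fffb3ec0ca81"
EXPECTED_SIZE = 14138164543  # bytes, from the pointer's "size" line
path = "pytorch_model.bin"   # placeholder local path

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("pytorch_model.bin matches the LFS pointer")
```

Cloning the repository with Git LFS installed (`git lfs install`, then `git clone`) fetches the full weights in place of this pointer.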