Commit af5c7d6
hughesthe1st committed
1 Parent(s): 7dcd4bc
Added link to May 4th announcement
README.md CHANGED
@@ -124,7 +124,7 @@ On [March 20, 2022 ](https://twitter.com/BigCodeProject/status/16378747056455843
 
 On [April 13, 2023](https://twitter.com/harmdevries77/status/1646524056538316805?s=20) Inspired by discussions in the training working group, Harm de Vries shared an analysis of Chinchilla scaling laws on how much additional compute resources are needed to create smaller LLMs. These insights suggest we have not reached the limit of training smaller models on more tokens - an important consideration for future research.
 
-On May 4, 2023 BigCode announced StarCoder and StarCoderBase, two code LLMs trained on permissively licensed data from GitHub, including from 80+ programming languages, git commits, GitHub issues, and Jupyter notebooks. Similar to [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), StarCoderBase is a ~15B parameter model trained on 1 trillion tokens. On top of StarCoderBase a variant called StarCoder is trained for 35B additional tokens purely on Python.
+On [May 4, 2023](https://twitter.com/BigCodeProject/status/1654174941976068119?s=20) BigCode announced StarCoder and StarCoderBase, two code LLMs trained on permissively licensed data from GitHub, including from 80+ programming languages, git commits, GitHub issues, and Jupyter notebooks. Similar to [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), StarCoderBase is a ~15B parameter model trained on 1 trillion tokens. On top of StarCoderBase a variant called StarCoder is trained for 35B additional tokens purely on Python.
 
 
 ### Supporting Resources and Funding