reshinthadith committed
Commit • 494b1bf • 1 Parent(s): e9af0ee
Update README.md

README.md CHANGED
@@ -38,7 +38,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ## Model Details
 
-* **Developed by**:
+* **Developed by**: [Stability AI](https://stability.ai/)
 * **Model type**: `StableCode-Completion-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
 * **Language(s)**: Code
 * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
@@ -62,7 +62,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Dataset
 
-The first pre-training stage relies on 300B tokens sourced from various top programming languages occurring in the Stack Overflow Developer Survey, drawn from the `starcoder-data` dataset. We then fine-tune it on a longer-context augmentation of the `starcoder-data` dataset.
+The first pre-training stage relies on 300B tokens sourced from various top programming languages occurring in the Stack Overflow Developer Survey, drawn from the `starcoder-data` dataset. We then fine-tune it on a longer-context augmentation of the `starcoder-data` dataset, which increased the average tokens per sample to 20k.
 
 ### Training Procedure
 
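The `starcoder-data` dataset named in the Training Dataset hunk is published on the Hugging Face Hub. A minimal sketch for peeking at it, assuming the `bigcode/starcoderdata` repo id, its per-language `data_dir` layout, and a `content` column (all assumptions, not stated in the diff):

```python
# Hypothetical peek at the pre-training corpus described above.
# The repo id "bigcode/starcoderdata", the per-language data_dir layout,
# and the "content" column are assumptions; the diff only names `starcoder-data`.
from datasets import load_dataset

# Stream to avoid downloading the full multi-hundred-GB corpus.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",  # one of the top survey languages, as an example
    split="train",
    streaming=True,
)

# Inspect the first sample's source code.
sample = next(iter(ds))
print(sample["content"][:500])
```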
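For context, the hunk header's `print(tokenizer.decode(tokens[0], skip_special_tokens=True))` line comes from the README's usage snippet. A minimal sketch of that pattern with the standard `transformers` API, assuming the `stabilityai/stablecode-completion-alpha-3b` Hub id and an example prompt (both are assumptions here):

```python
# Minimal sketch of the usage pattern implied by the hunk context line
# `print(tokenizer.decode(tokens[0], skip_special_tokens=True))`.
# The repo id and the prompt are assumptions, not taken from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablecode-completion-alpha-3b"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the completion model to continue the start of a function.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```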