Commit 414e73c by reshinthadith (parent: bc83af7): Create README.md

README.md (ADDED)
---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
license: cc-by-sa-4.0
---
# `StableCode-Completion-Alpha-3B`

## Model Description

`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.

## Usage
The model is intended to perform single- and multi-line code completion from a long context window of up to 4k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b-4k",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()

# Tokenize a code prompt and sample a short completion.
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Model Architecture

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560        | 32     | 32    | 4096            |

* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master)); see the illustrative sketch after this list
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
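
The decoder layer above can be pictured with a short PyTorch sketch. This is illustrative only, not the GPT-NeoX training code: the hidden size, head count, single input LayerNorm, bias placement, and rotary embeddings follow the table and bullets above, while the 4x MLP width, the GELU activation, and applying rotary rotation across the full head dimension are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_rotary(x, base=10000):
    # x: (batch, heads, seq, head_dim). Rotate channel pairs by position-dependent angles.
    _, _, seq, dim = x.shape
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, device=x.device, dtype=torch.float32) / dim))
    angles = torch.arange(seq, device=x.device, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()          # each (seq, head_dim / 2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


class ParallelDecoderBlock(nn.Module):
    """One decoder layer: a single input LayerNorm feeds both the attention branch and
    the MLP branch, and both outputs are added to the residual stream in parallel."""

    def __init__(self, hidden=2560, heads=32):
        super().__init__()
        self.heads, self.head_dim = heads, hidden // heads
        self.norm = nn.LayerNorm(hidden)           # the only module carrying bias terms
        self.qkv = nn.Linear(hidden, 3 * hidden, bias=False)
        self.attn_out = nn.Linear(hidden, hidden, bias=False)
        self.mlp_in = nn.Linear(hidden, 4 * hidden, bias=False)   # 4x width is an assumption
        self.mlp_out = nn.Linear(4 * hidden, hidden, bias=False)

    def forward(self, x):
        bsz, seq, _ = x.shape
        h = self.norm(x)                           # shared input LayerNorm

        # Attention branch with rotary position embeddings applied to q and k.
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        q, k, v = (t.reshape(bsz, seq, self.heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        q, k = apply_rotary(q), apply_rotary(k)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)   # needs PyTorch >= 2.0
        attn = self.attn_out(attn.transpose(1, 2).reshape(bsz, seq, -1))

        # MLP branch computed from the same normalized input.
        mlp = self.mlp_out(F.gelu(self.mlp_in(h)))

        # Parallel residual: x + Attn(LN(x)) + MLP(LN(x)).
        return x + attn + mlp


block = ParallelDecoderBlock()
print(block(torch.randn(1, 16, 2560)).shape)       # torch.Size([1, 16, 2560])
```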

## Training

`StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoderdata` dataset.

### Training Dataset

The first pre-training stage relies on 300B tokens sourced from the top programming languages in the Stack Overflow developer survey, as present in the `starcoderdata` dataset.

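For reference, the corpus can be explored directly from the Hub. The sketch below streams one language subset with the `datasets` library; the `python` value for `data_dir` and the `content` field follow the dataset card's conventions, and access may require accepting the dataset's terms of use.

```python
from datasets import load_dataset

# Stream one language subset of the pre-training corpus; streaming avoids
# downloading the full dataset locally.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)

# Peek at the first couple of source files.
for example in ds.take(2):
    print(example["content"][:200])
```
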
### Training Procedure

The model is pre-trained on the dataset mixes mentioned above in mixed-precision (BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k.

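Since completions are produced over this tokenizer, it can be useful to inspect it directly. A small sketch, assuming the Hub checkpoint bundles the tokenizer as in the usage snippet above:

```python
from transformers import AutoTokenizer

# The checkpoint ships with the StarCoder tokenizer; the card reports a
# vocabulary of roughly 49k entries, which we can check after loading.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
print(f"vocab size: {tokenizer.vocab_size}")

# Round-trip a small code snippet to see how source text maps to tokens.
ids = tokenizer("def add(a, b):\n    return a + b")["input_ids"]
print(len(ids), tokenizer.decode(ids))
```
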
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).

## Use and Limitations

### Intended Use

### Limitations and bias