s3nh committed on
Commit 3eb006d
1 Parent(s): 2d34ffc

Create README.md

Files changed (1): README.md (+104 -0)

---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>

#### Description

GPTQ-format model files for [stabilityai/stablecode-completion-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b-4k).

### Inference
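
GPTQ checkpoints are typically loaded through the `auto-gptq` library rather than plain `transformers`. A minimal sketch, assuming this repository ships quantized safetensors; the repo id below is hypothetical, so adjust the id and flags to match the actual files:

```python
# Minimal sketch: loading a GPTQ checkpoint with auto-gptq.
# The repo id is hypothetical; point it at the actual quantized files.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "s3nh/stablecode-completion-alpha-3b-4k-GPTQ"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",         # GPTQ kernels require a CUDA device
    use_safetensors=True,
    trust_remote_code=True,  # the base model relies on custom modeling code
)

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda:0")
tokens = model.generate(**inputs, max_new_tokens=48, temperature=0.2, do_sample=True)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```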

# `StableCode-Completion-Alpha-3B-4K`

## Model Description

`StableCode-Completion-Alpha-3B-4K` is a 3-billion-parameter decoder-only code completion model, pre-trained on a diverse set of programming languages that topped the Stack Overflow Developer Survey.

## Usage
The model is intended for single- and multi-line code completion from a long context window of up to 4k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b-4k",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()  # requires a CUDA-capable GPU

# Complete a short code prompt; low temperature keeps completions conservative
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
60
+
61
+ ### Model Architecture
62
+
63
+ | Parameters | Hidden Size | Layers | Heads | Sequence Length |
64
+ |----------------|-------------|--------|-------|-----------------|
65
+ | 2,796,431,360 | 2560 | 32 | 32 | 4096 |
66
+
67
+
68
+ * **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
69
+ * **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
70
+ * **Bias**: LayerNorm bias terms only
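
For intuition, here is a minimal PyTorch sketch of such a parallel-residual decoder block. It is illustrative only: the real model also applies rotary embeddings and causal masking inside attention, and its exact layer internals live in the GPT-NeoX codebase.

```python
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """Sketch of a parallel-residual block: x + attn(ln(x)) + mlp(ln(x))."""

    def __init__(self, d_model: int = 2560, n_heads: int = 32):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # single input LayerNorm shared by both branches
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # attention and MLP read the same normalized input; their outputs
        # are summed into the residual stream in parallel
        return x + attn_out + self.mlp(h)
```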

## Training

`StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoder-data` dataset.
75
+
76
+ ### Training Dataset
77
+
78
+ The first pre-training stage relies on 300B tokens sourced from various top programming languages occuring in the stackoverflow developer survey present in the `starcoder-data` dataset.
79
+
80
+ ### Training Procedure
81
+
82
+ The model is pre-trained on the dataset mixes mentioned above in mixed-precision BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k.
83
+
84
+ * **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
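
For illustration only (the actual run uses the distributed gpt-neox trainer, not this toy loop), a single BF16 mixed-precision step with AdamW looks roughly like:

```python
import torch
import torch.nn.functional as F
from torch.optim import AdamW

# Illustrative only: a toy BF16 step with AdamW on a stand-in module,
# not the 2D-parallel gpt-neox loop used for this model.
model = torch.nn.Linear(2560, 49152).cuda()  # stand-in for the language model
optimizer = AdamW(model.parameters(), lr=2e-4)

x = torch.randn(8, 2560, device="cuda")
targets = torch.randint(0, 49152, (8,), device="cuda")

# Forward ops run in bfloat16; parameters and gradients stay in FP32,
# so no loss scaling is needed (unlike FP16 training).
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    logits = model(x)
    loss = F.cross_entropy(logits, targets)

loss.backward()
optimizer.step()
optimizer.zero_grad()
```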

## Use and Limitations

### Intended Use

StableCode-Completion-Alpha-3B-4K generates new code completions on its own, but we recommend using it together with the VS Code extension developed by BigCode and Hugging Face ([huggingface/huggingface-vscode](https://github.com/huggingface/huggingface-vscode)) to identify and, if necessary, attribute any outputs that match training code.

### Limitations and bias

This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.

## How to cite

```bibtex
@misc{StableCodeCompleteAlpha4K,
  url={https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k},
  title={Stable Code Complete Alpha},
  author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
```