---
datasets:
  - bigcode/starcoderdata
language:
  - code
tags:
  - causal-lm
license: cc-by-sa-4.0
---

StableCode-Completion-Alpha-3B

Model Description

StableCode-Completion-Alpha-3B is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages, selected from the most-used languages in the 2023 Stack Overflow Developer Survey.

Usage

The model is intended to perform single- and multi-line code completion over a long context window of up to 16k tokens. Get started generating code with StableCode-Completion-Alpha-3B by using the following code snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/stablecode-completion-alpha-3b",
  trust_remote_code=True,
  torch_dtype="auto",
)
model.cuda()

# Tokenize a code prompt, sample a completion, and decode it back to text
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
  **inputs,
  max_new_tokens=48,
  temperature=0.2,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
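
Because the model accepts long contexts, the same generate() call also works for multi-line, file-level prompts. The snippet below is a minimal illustration that reuses the tokenizer and model loaded above; the prompt contents are arbitrary example code, not taken from the training data.

# Illustrative multi-line prompt; any partial source file can be used here
prompt = """import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Train a model")
    parser.add_argument("--epochs", type=int, default=10)
"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(**inputs, max_new_tokens=64, temperature=0.2, do_sample=True)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))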

Model Details

  • Developed by: Stability AI
  • Model type: StableCode-Completion-Alpha-3B models are auto-regressive language models based on the transformer decoder architecture.
  • Language(s): Code
  • Library: GPT-NeoX
  • License: Model checkpoints are licensed under the Creative Commons license (CC BY-SA-4.0). Under this license, you must give credit to Stability AI, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
  • Contact: For questions and comments about the model, please email lm@stability.ai

Model Architecture

Parameters    | Hidden Size | Layers | Heads | Sequence Length
------------- | ----------- | ------ | ----- | ---------------
2,796,431,360 | 2560        | 32     | 32    | 16384
  • Decoder Layer: Parallel Attention and MLP residuals with a single input LayerNorm (Wang & Komatsuzaki, 2021)
  • Position Embeddings: Rotary Position Embeddings (Su et al., 2021)
  • Bias: LayerNorm bias terms only
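
As a rough illustration of the parallel-residual decoder block described above, the sketch below shows attention and MLP both reading a single LayerNorm of the input, with their outputs added back to the residual stream together. This is a simplified sketch, not the released GPT-NeoX implementation; rotary position embeddings, causal masking, and dimension details are omitted.

import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """Simplified parallel-residual block: x + Attn(LN(x)) + MLP(LN(x))."""
    def __init__(self, hidden_size=2560, num_heads=32):
        super().__init__()
        self.input_layernorm = nn.LayerNorm(hidden_size)  # single shared input LayerNorm
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.GELU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )

    def forward(self, x, attn_mask=None):
        h = self.input_layernorm(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        # Parallel residuals (Wang & Komatsuzaki, 2021): attention and MLP outputs
        # are computed from the same normalized input and summed into the residual.
        return x + attn_out + self.mlp(h)

For example, ParallelDecoderBlock()(torch.randn(1, 8, 2560)) returns a tensor of the same shape.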

Training

StableCode-Completion-Alpha-3B is pre-trained using a multi-stage context-length extension schedule following similar work (Nijkamp et al., 2023): first pre-training at a context length of 4096 for 300 billion tokens, then fine-tuning at a context length of 16384 for another 200 billion tokens.
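
For quick reference, the two stages described above can be summarized as follows. This is a plain data sketch of the numbers stated in this card, not an actual training configuration; the field names are illustrative.

context_extension_schedule = [
    {"stage": 1, "context_length": 4096,  "training_tokens": 300_000_000_000},  # initial pre-training
    {"stage": 2, "context_length": 16384, "training_tokens": 200_000_000_000},  # long-context fine-tuning
]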

Training Dataset

The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow Developer Survey, drawn from the starcoder-data dataset. We then fine-tune the model on a longer-context augmentation of the starcoder-data dataset, which increases the average tokens per sample to 20k.

Training Procedure

The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 49k.
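
For orientation only, the sketch below loads the released checkpoint in BF16, checks the tokenizer vocabulary size, and sets up an AdamW optimizer in the same spirit. The learning rate is a placeholder value, not a hyperparameter from the actual pre-training run.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
print(len(tokenizer))  # NeoX tokenizer, ~49k vocabulary

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # mixed-precision BF16, as in pre-training
)
# Placeholder learning rate for illustration; not the value used in training.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)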

Use and Limitations

Intended Use

These models are intended to be used by developers and researchers as foundational models for application-specific fine-tuning.

Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models for any applications that may cause harm or distress to individuals or groups.