ncoop57 committed
Commit
fdc3e2e
1 Parent(s): 494b1bf

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -11,7 +11,7 @@ license: cc-by-sa-4.0
 
 ## Model Description
 
-`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on diverse set of programming languages that topped the stackoverflow developer survey.
+`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on diverse set of programming languages that were the top used languages based on the 2023 stackoverflow developer survey.
 
 ## Usage
 The model is intended to do single/multiline code completion from a long context window upto 16k tokens.
@@ -21,7 +21,7 @@ Get started generating code with `StableCode-Completion-Alpha-3B` by using the f
 from transformers import AutoModelForCausalLM, AutoTokenizer
 tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
 model = AutoModelForCausalLM.from_pretrained(
-  "stabilityai/stablelm-base-alpha-3b-v2",
+  "stabilityai/stablecode-completion-alpha-3b",
   trust_remote_code=True,
   torch_dtype="auto",
 )
@@ -74,6 +74,8 @@ The model is pre-trained on the dataset mixes mentioned above in mixed-precision
 
 ### Intended Use
 
+These models are intended to be used by developers and researchers as foundational models for application-specific fine-tuning.
 
 ### Limitations and bias
 
+The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models for any applications that may cause harm or distress to individuals or groups.
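For reference, the snippet touched by the second hunk only covers loading the tokenizer and model; the commit's fix is that the model id now matches the tokenizer id. A minimal end-to-end completion sketch is shown below. The prompt, the `model.cuda()` call, and the generation settings (`max_new_tokens`, `temperature`, `do_sample`) are illustrative assumptions, not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; after this commit both point at the same repo.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()  # assumption: a CUDA-capable GPU is available

# Illustrative code-completion prompt and sampling settings (not taken from the diff).
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```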