ncoop57 committed on
Commit
edbd0a0
1 Parent(s): 8d12871

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -5,13 +5,13 @@ language:
 - code
 tags:
 - causal-lm
-license: cc-by-sa-4.0
+license: apache-2.0
 ---
-# `StableCode-Completion-Alpha-3B`
+# `StableCode-Completion-Alpha-3B-4K`
 
 ## Model Description
 
-`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow Developer Survey.
+`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow Developer Survey.
 
 ## Usage
 The model is intended to perform single- and multi-line code completion from a long context window of up to 4k tokens.
@@ -42,7 +42,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 * **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
 * **Language(s)**: Code
 * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
-* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests Stability AI endorses you or your use.
+* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
 * **Contact**: For questions and comments about the model, please email `lm@stability.ai`
 
 ### Model Architecture
@@ -74,5 +74,8 @@ The model is pre-trained on the dataset mixes mentioned above in mixed-precision
 
 ### Intended Use
 
+StableCode-Completion-Alpha-3B-4K generates code completions on its own, but we recommend using it together with [huggingface/huggingface-vscode](https://github.com/huggingface/huggingface-vscode), the code-completion VS Code extension for OSS models developed by BigCode and Hugging Face, to identify and, if necessary, attribute any outputs that match training code.
 
 ### Limitations and bias
+
+This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
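The second hunk header above ends with the tail of the README's usage snippet (`print(tokenizer.decode(tokens[0], skip_special_tokens=True))`), whose body falls outside this diff's context. A minimal sketch of the standard `transformers` completion workflow that line presumably belongs to; the repo id `stabilityai/stablecode-completion-alpha-3b-4k` and all generation parameters here are assumptions, not part of this commit:

```python
# Sketch only: repo id, dtype, and generation settings are assumptions,
# not taken from the diff above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "stabilityai/stablecode-completion-alpha-3b-4k"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption: half precision to fit the 3B weights
)

# Prompt with the beginning of a file; the model completes the next tokens.
inputs = tokenizer("import torch\nimport torch.nn as nn\n", return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

Note the prompt plus `max_new_tokens` must stay within the 4k-token context window the README describes.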