madhavatreplit committed on
Commit 6bf2846
1 Parent(s): d565497

Update README.md (#7)

- Update README.md (2fbbec2d1f85e87bc853183f6726db4b83819aa6)

Files changed (1)
  1. README.md +18 -5
README.md CHANGED
@@ -1,5 +1,16 @@
---
license: apache-2.0
---

# Replit Code V-1.5 3B
@@ -10,13 +21,16 @@ Developed by: Replit, Inc.

Replit Code v1.5 is a 3.3B parameter Causal Language Model focused on **Code Completion**.

- The model is trained in `bfloat16` on 1T tokens of code (~200B tokens over 5 epochs, including linear cooldown) for 30 programming languages from a subset of permissively licensed code from Bigcode's [Stack Dedup V2 dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup) and dev-oriented samples from StackExchange.
- The context size is 4096 tokens and can be extended using techniques on its ALiBi positional embeddings.

- We use the GPTNeoX tokenizer with a custom trained and optimized vocabulary of 32768 tokens. This custom vocabulary led to single-digit percentage-point improvements in compression while maintaining or improving coverage on our training corpus.

- The model has been trained on the [MosaicML](https://www.mosaicml.com/) platform on 128 H100-80GB GPUs.


## Dependencies
You will need to install the latest versions of the following dependencies:
@@ -85,6 +99,5 @@ Replit intends this model be used by anyone as a foundational model for applicat
The model is trained specifically for code completion tasks.


-
## Limitations
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing and toxicity and profanity filters, and such content may be reflected in model-generated text. We recommend that users exercise reasonable caution when using it in production systems. Do not use for any applications that may cause harm or distress to individuals or groups.
 
@@ -1,5 +1,16 @@
---
license: apache-2.0
+ datasets:
+ - bigcode/the-stack-dedup
+ - togethercomputer/RedPajama-Data-1T
+ tags:
+ - code
+ - Composer
+ - MosaicML
+ - llm-foundry
+ - StreamingDatasets
+ language:
+ - code
---

# Replit Code V-1.5 3B
 
@@ -10,13 +21,16 @@ Developed by: Replit, Inc.

Replit Code v1.5 is a 3.3B parameter Causal Language Model focused on **Code Completion**.

+ The model is trained in `bfloat16` on 1T tokens of code (~200B tokens over 5 epochs, including linear cooldown) for 30 programming languages from a subset of permissively licensed code from Bigcode's [Stack Dedup dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup), a filtered natural language sample from Markdown and reStructuredText subsets from the same Stack Dedup dataset, and a dev-oriented sample from [RedPajama's StackExchange dataset](https://github.com/togethercomputer/RedPajama-Data) sourced from the [Stack Exchange Data Dump by Stack Exchange Inc](https://archive.org/details/stackexchange).
 
+ The 30 programming languages are:
+ ```
+ Java, JavaScript, C, PHP, Python, C++, C#, TypeScript, Go, CSS, HTML, Rust, Ruby, Swift, Scala, Shell, Lua, Perl, Haskell, JSX, Julia, Common Lisp, OCaml, Solidity, Scheme, R, Zig, SQL, Racket, D
+ ```
 
+ The context size of the model is 4096 tokens. We use the GPTNeoX tokenizer with a custom trained and optimized vocabulary of 32768 tokens. This custom vocabulary led to single-digit percentage-point improvements in compression while maintaining or improving coverage on our training corpus.
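
As a rough sketch, loading this tokenizer with Hugging Face `transformers` might look like the following (the repo id `replit/replit-code-v1_5-3b` and the need for `trust_remote_code=True` are assumptions):

```python
# Minimal sketch: load the custom 32768-token vocabulary tokenizer.
# The repo id and the trust_remote_code requirement are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "replit/replit-code-v1_5-3b",
    trust_remote_code=True,
)

ids = tokenizer("def fibonacci(n):", return_tensors="pt").input_ids
print(tokenizer.vocab_size)  # expected to report the 32768-token vocabulary
print(ids.shape)             # prompts plus completions must fit the 4096-token context
```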
 
+ The model has been trained on the [MosaicML](https://www.mosaicml.com/) platform on 128 H100-80GB GPUs using their [LLM Foundry](https://github.com/mosaicml/llm-foundry) and [Composer](https://github.com/mosaicml/composer) training libraries built on top of PyTorch.
 
## Dependencies
You will need to install the latest versions of the following dependencies:
 
@@ -85,6 +99,5 @@ Replit intends this model be used by anyone as a foundational model for applicat
The model is trained specifically for code completion tasks.
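
For illustration, a minimal completion-style call with Hugging Face `transformers` might look like this (the repo id, `trust_remote_code=True`, and the sampling settings are assumptions rather than Replit's documented recipe):

```python
# Minimal sketch: prompt the model with a partial function and generate a completion.
# Repo id, dtype, device handling, and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "replit/replit-code-v1_5-3b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the card notes the model was trained in bfloat16
    trust_remote_code=True,
).to(device)

prompt = "def fibonacci(n):\n    "
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Low-temperature sampling (or plain greedy decoding) is a common default for completion-style prompts; the settings above are placeholders, not tuned recommendations.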

## Limitations
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing and toxicity and profanity filters, and such content may be reflected in model-generated text. We recommend that users exercise reasonable caution when using it in production systems. Do not use for any applications that may cause harm or distress to individuals or groups.