muhtasham committed
Commit c9442e6
1 Parent(s): 9e0590a

Updated Context length precision

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -186,7 +186,7 @@ Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces
 # Model Summary
 
 The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
-The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), a context window of 2k tokens, and was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
+The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), a context window of 2048 tokens, and was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
 In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
 
 - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
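
For context, here is a minimal sketch of what the corrected line describes in practice: prompting the model with its Fill-in-the-Middle objective while keeping prompt plus generation inside the 2048-token context window. The `bigcode/santacoder` checkpoint name and the `<fim-prefix>`/`<fim-suffix>`/`<fim-middle>` token spellings are assumptions taken from the SantaCoder model card, not from this commit.

```python
# Minimal sketch (not part of this commit): exercising the Fill-in-the-Middle
# objective described in the README line above. Checkpoint name and FIM token
# spellings are assumed from the SantaCoder model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code is needed because the checkpoint ships custom
# Multi Query Attention model code.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Prefix-suffix-middle prompt: the model fills in the function body that
# belongs between the prefix and the suffix.
prompt = (
    "<fim-prefix>def fib(n):\n"
    "<fim-suffix>\n    return fib(n - 1) + fib(n - 2)<fim-middle>"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Prompt plus generated tokens must fit in the 2048-token context window.
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```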