Added missing info about context window
2k as per
@harmdevries
https://bigcode-workspace.slack.com/archives/C04LH3FC5CG/p1683561032952279?thread_ts=1683560625.981969&cid=C04LH3FC5CG
README.md
CHANGED
```diff
@@ -186,7 +186,7 @@ Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces
 # Model Summary
 
 The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
-The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
+The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), a context window of 2k tokens, and was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
 In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
 
 - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
```
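The edited line now states a 2k-token context window alongside Multi Query Attention and the Fill-in-the-Middle objective. As a quick sanity check, a minimal sketch (assuming the checkpoint is published on the Hub as `bigcode/santacoder` and exposes its maximum sequence length through a GPT-2-style config attribute; both common attribute names are tried, since the exact one is not stated in this commit) could read the value directly from the config:

```python
# Minimal sketch: read the context window from the published checkpoint's config.
# Assumptions: the model id "bigcode/santacoder" and the attribute names
# n_positions / max_position_embeddings come from common GPT-2-style
# Hugging Face configs, not from this commit itself.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/santacoder", trust_remote_code=True)
context_window = getattr(config, "n_positions", None) or getattr(
    config, "max_position_embeddings", None
)
print(f"Context window: {context_window} tokens")  # a 2k window would print 2048
```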