TechxGenus committed on
Commit d89187a (1 parent: 4ec83ef)

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,260 @@
---
license: other
datasets:
- tiiuae/falcon-refinedweb
- bigcode/the-stack-github-issues
- bigcode/commitpackft
- bigcode/starcoderdata
- EleutherAI/proof-pile-2
- meta-math/MetaMathQA
language:
- en
tags:
- causal-lm
- code
metrics:
- code_eval
library_name: transformers
model-index:
- name: stabilityai/stable-code-3b
  results:
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.4
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 30.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (PHP)
    metrics:
    - name: pass@1
      type: pass@1
      value: 24.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 23.0
      verified: false
---

AWQ-quantized version of the stable-code-3b model.
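
A minimal loading sketch for this 4-bit AWQ checkpoint, assuming a recent `transformers` with `autoawq` and `accelerate` installed; the repository ID below is a placeholder, not confirmed by this upload:

```python
# Hedged sketch: load the AWQ-quantized weights directly with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "TechxGenus/stable-code-3b-AWQ"  # assumed repo ID; replace with this repository's actual path

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=48, temperature=0.2, do_sample=True)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```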

---

# `stable-code-3b`

## Model Description

`stable-code-3b` is a 2.7 billion parameter decoder-only language model pre-trained on 1.3 trillion tokens of diverse textual and code datasets. `stable-code-3b` is trained on 18 programming languages (selected based on the 2023 Stack Overflow Developer Survey) and demonstrates state-of-the-art performance (compared to models of similar size) on the MultiPL-E metrics across multiple programming languages tested using [BigCode's Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main).

![spiderchart](stable_code_3b_spiderchart.svg)

| Model            | Size | Python | C++   | JavaScript | Java  | PHP   | Rust  |
|------------------|------|--------|-------|------------|-------|-------|-------|
| **Stable Code**  | 3B   | 32.4%  | 30.9% | 32.1%      | 32.1% | 24.2% | 23.0% |
| CodeLlama        | 7B   | 30.0%  | 28.2% | 32.5%      | 31.1% | 25.7% | 26.3% |
| Deepseek Coder   | 1.3B | 28.6%  | 29.2% | 28.7%      | 29.0% | 23.6% | 18.5% |
| Wizard Coder     | 3B   | 31.6%  | 25.6% | 26.2%      | 25.8% | 25.3% | 20.4% |
| StarCoder        | 3B   | 21.6%  | 19.8% | 21.5%      | 20.5% | 19.0% | 16.9% |
| Replit Code V1.5 | 3B   | 23.0%  | 25.9% | 26.2%      | 23.6% | 23.2% | 21.5% |
| Deci Coder       | 1B   | 19.1%  | 6.8%  | 18.4%      | 16.7% | 2.1%  | 1.7%  |

**Key Features**
* Fill in the Middle (FIM) capability
* Long-context support, trained with sequences of up to 16,384 tokens

## Usage

Get started generating text with `stable-code-3b` by using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype="auto",  # load weights in the dtype stored in the checkpoint
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

### Run with Fill in the Middle (FIM) ⚡️

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
)
model.cuda()
# FIM prompt: the model generates the span between <fim_prefix> and <fim_suffix>
inputs = tokenizer("<fim_prefix>def fib(n):<fim_suffix>    else:\n        return fib(n - 2) + fib(n - 1)<fim_middle>", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

</details>

### Run with Flash Attention 2 ⚡️

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    trust_remote_code=True,
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

</details>


## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `stable-code-3b` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English, Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: StabilityAI Non-Commercial Research Community License. If you want to use this model for your commercial products or purposes, please contact us [here](https://stability.ai/membership) to learn more.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Model Architecture

The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications (the key values can also be read back from the model config, as sketched below):

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560        | 32     | 32    | 16384           |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput, following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Tokenizer**: We use a modified version of the GPT-NeoX tokenizer ([`NeoX`](https://github.com/EleutherAI/gpt-neox)), adding special tokens such as `<fim_prefix>` and `<fim_suffix>` to train the Fill in the Middle (FIM) capability, along with other special tokens.
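
A small inspection sketch for these architectural choices; the field names are taken from the `config.json` in this upload, and the base repo ID is assumed:

```python
from transformers import AutoConfig

# Read the architecture parameters back from the checkpoint's config.
cfg = AutoConfig.from_pretrained("stabilityai/stable-code-3b")
print(cfg.hidden_size, cfg.num_hidden_layers, cfg.num_attention_heads)  # 2560, 32, 32
print(cfg.partial_rotary_factor)                                        # 0.25 -> rotary on first 25% of head dims
print(cfg.rope_theta, cfg.max_position_embeddings)                      # 1000000, 16384
```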

## Training

### Training Dataset

The dataset comprises a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): the Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), along with [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) and [Github Issues](https://huggingface.co/datasets/bigcode/the-stack-github-issues) (BigCode, 2023), and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)). We further supplement our training with data from mathematical domains ([Azerbayev, Zhangir, et al., 2023](https://arxiv.org/abs/2310.10631) and [Yu, Longhui, et al., 2023](https://arxiv.org/abs/2309.12284)).

Top 18 programming languages trained on:
- C
- C++
- Java
- JavaScript
- CSS
- Go
- HTML
- Ruby
- Rust
- Markdown
- Shell
- PHP
- SQL
- R
- TypeScript
- Python
- Jupyter-Clean
- reStructuredText

### Training Procedure

The model is pre-trained on the aforementioned datasets in `bfloat16` precision and optimized with AdamW.

### Training Infrastructure

* **Hardware**: `stable-code-3b` was trained on the Stability AI cluster across 256 NVIDIA A100 40GB GPUs (AWS P4d instances).

* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

## How to Cite

```bibtex
@misc{stable-code-3b,
  url={https://huggingface.co/stabilityai/stable-code-3b},
  title={Stable Code 3B},
  author={Pinnaparaju, Nikhil and Adithyan, Reshinth and Phung, Duy and Tow, Jonathan and Baicoianu, James and Cooper, Nathan}
}
```
config.json ADDED
@@ -0,0 +1,37 @@
{
  "_name_or_path": "stable-code-3b",
  "architectures": [
    "StableLmForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "hidden_act": "silu",
  "hidden_dropout": 0.0,
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 16384,
  "model_type": "stablelm",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "partial_rotary_factor": 0.25,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "modules_to_not_convert": null,
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },
  "rope_scaling": null,
  "rope_theta": 1000000,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.39.3",
  "use_cache": true,
  "use_qkv_bias": false,
  "vocab_size": 50304
}
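
The `quantization_config` above (4-bit AWQ, group size 128, zero-point, GEMM kernels) matches what AutoAWQ produces. A hedged sketch of how such a checkpoint could be generated; this is an assumption about the workflow, not a record of how this upload was made:

```python
# Hypothetical reproduction sketch with AutoAWQ (pip install autoawq); output directory name is illustrative.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base = "stabilityai/stable-code-3b"
quant_config = {"w_bit": 4, "q_group_size": 128, "zero_point": True, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
model.quantize(tokenizer, quant_config=quant_config)  # calibrate and pack weights to 4-bit
model.save_quantized("stable-code-3b-AWQ")
tokenizer.save_pretrained("stable-code-3b-AWQ")
```
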
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "do_sample": true,
  "eos_token_id": 0,
  "transformers_version": "4.39.3"
}
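
These defaults are picked up automatically by `model.generate`; a small sketch for inspecting them (the base repo ID is assumed):

```python
from transformers import GenerationConfig

# Load the generation defaults shipped with the checkpoint.
gen_cfg = GenerationConfig.from_pretrained("stabilityai/stable-code-3b")
print(gen_cfg.do_sample, gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # expect True 0 0
```
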
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e87d410c02a7a1f74c1020c69652067d561bc577a67a27104908d31f30264965
size 1834207616
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,363 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|padding|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50254": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50255": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50256": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50257": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50258": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50259": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50260": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50261": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50262": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50263": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50264": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50265": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50266": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50267": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50268": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50269": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50270": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50271": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50272": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50273": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50274": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50275": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50276": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50277": {
      "content": "<fim_prefix>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50278": {
      "content": "<fim_middle>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50279": {
      "content": "<fim_suffix>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50280": {
      "content": "<fim_pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50281": {
      "content": "<filename>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50282": {
      "content": "<gh_stars>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50283": {
      "content": "<issue_start>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50284": {
      "content": "<issue_comment>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50285": {
      "content": "<issue_closed>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50286": {
      "content": "<jupyter_start>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50287": {
      "content": "<jupyter_text>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50288": {
      "content": "<jupyter_code>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50289": {
      "content": "<jupyter_output>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50290": {
      "content": "<empty_output>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50291": {
      "content": "<commit_before>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50292": {
      "content": "<commit_msg>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50293": {
      "content": "<commit_after>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50294": {
      "content": "<reponame>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50295": {
      "content": "<repo_continuation>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "GPTNeoXTokenizer",
  "unk_token": "<|endoftext|>"
}
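
As a quick sanity check that the FIM control tokens above resolve to the IDs listed in `added_tokens_decoder`, a short sketch (the base repo ID is assumed):

```python
from transformers import AutoTokenizer

# Verify the FIM special tokens map to the expected token IDs.
tok = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
for t in ["<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fim_pad>"]:
    print(t, tok.convert_tokens_to_ids(t))  # expect 50277, 50278, 50279, 50280
```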