svakhreev committed
Commit 8e14232
1 Parent(s): f895684

Upload 11 files

README.md CHANGED
---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: StarCoderBase-1B
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 15.17
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 11.68
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 14.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 13.38
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (PHP)
    metrics:
    - name: pass@1
      type: pass@1
      value: 9.94
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Lua)
    metrics:
    - name: pass@1
      type: pass@1
      value: 12.52
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 10.24
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Swift)
    metrics:
    - name: pass@1
      type: pass@1
      value: 3.92
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Julia)
    metrics:
    - name: pass@1
      type: pass@1
      value: 11.31
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (R)
    metrics:
    - name: pass@1
      type: pass@1
      value: 5.37
      verified: false
extra_gated_prompt: >-
  ## Model License Agreement

  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.
extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
duplicated_from: bigcode-data/starcoderbase-1b
---

# StarCoderBase-1B

1B version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase).

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

StarCoderBase-1B is a 1B-parameter model trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained with the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.

- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** 80+ programming languages

## Use

### Intended use

The model was trained on GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.

**Feel free to share your generations in the Community tab!**

### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderbase-1b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
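
By default `generate` returns only a short greedy continuation. For longer or more varied completions you can pass decoding parameters explicitly; the values below are illustrative rather than tuned recommendations, and the snippet reuses `model`, `tokenizer`, and `inputs` from the example above.

```python
# Sketch: a longer, sampled completion (parameter values are illustrative).
outputs = model.generate(
    inputs,
    max_new_tokens=64,                    # length of the generated continuation
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.2,                      # low temperature tends to suit code
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # avoids the missing-pad-token warning
)
print(tokenizer.decode(outputs[0]))
```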

### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output:

```python
input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
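
The same pattern works for any prefix/suffix pair. A small helper makes it reusable; this is only a sketch, and `fim_complete` is a hypothetical convenience wrapper (not part of the library) that reuses `model`, `tokenizer`, and `device` from above.

```python
# Hypothetical helper: wrap a prefix/suffix pair in the FIM prompt format
# shown above and return only the generated "middle" text.
def fim_complete(prefix: str, suffix: str, max_new_tokens: int = 64) -> str:
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Everything after the prompt tokens is the infilled middle part.
    return tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)

print(fim_complete("def add(a, b):\n    ", "\n"))
```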

### Attribution & Other Requirements

The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.

## Limitations

The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English, although other languages are also present. As such, the model is capable of generating code snippets given some context, but the generated code is not guaranteed to work as intended: it can be inefficient and contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model's limitations.

## Training

### Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 500k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
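
The architecture details above can be read straight off the released configuration; a quick sketch for inspecting them:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/starcoderbase-1b")
print(config.model_type)    # gpt_bigcode
print(config.multi_query)   # True: one shared key/value head
print(config.n_layer)       # 24 transformer layers
print(config.n_embd)        # hidden size 2048
print(config.n_head)        # 16 attention heads
print(config.n_positions)   # 8192-token context window
```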

### Hardware

- **GPUs:** 128 Tesla A100
- **Training time:** 11 days

### Software

- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)

## License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

## Citation

```bibtex
@article{li2023starcoder,
  title={StarCoder: may the source be with you!},
  author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
  year={2023},
  eprint={2305.06161},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

config.json ADDED

{
  "_name_or_path": "/fsx/bigcode/experiments/pretraining/conversions/starcoder-1b",
  "activation_function": "gelu_pytorch_tanh",
  "architectures": [
    "GPTBigCodeForCausalLM"
  ],
  "attention_softmax_in_fp32": true,
  "multi_query": true,
  "attn_pdrop": 0.1,
  "bos_token_id": 0,
  "embd_pdrop": 0.1,
  "eos_token_id": 0,
  "inference_runner": 0,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "max_batch_size": null,
  "max_sequence_length": null,
  "model_type": "gpt_bigcode",
  "n_embd": 2048,
  "n_head": 16,
  "n_inner": 8192,
  "n_layer": 24,
  "n_positions": 8192,
  "pad_key_length": true,
  "pre_allocate_kv_cache": false,
  "resid_pdrop": 0.1,
  "scale_attention_softmax_in_fp32": true,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "validate_runner_input": true,
  "vocab_size": 49152
}
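
As a rough sanity check on the "1B" label, the dimensions in this config account for roughly 1.1B weights; a back-of-the-envelope sketch (ignoring biases and layer norms, and assuming the output projection is tied to the input embedding):

```python
# Rough parameter count from the config above. Multi-query attention keeps a
# single shared key/value head of size head_dim instead of one per head.
n_embd, n_layer, n_inner, n_head = 2048, 24, 8192, 16
vocab_size, n_positions = 49152, 8192
head_dim = n_embd // n_head  # 128

embeddings = vocab_size * n_embd + n_positions * n_embd                  # token + position tables
attention = n_embd * n_embd + 2 * n_embd * head_dim + n_embd * n_embd    # Q, shared K/V, output proj
mlp = 2 * n_embd * n_inner                                               # up- and down-projections
total = embeddings + n_layer * (attention + mlp)
print(f"~{total / 1e9:.2f}B parameters")                                 # ~1.14B
```
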
generation_config.json ADDED

{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.28.1"
}
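
These values are the defaults that `model.generate()` falls back to when no decoding arguments are passed. They can be inspected (or overridden) through the standard transformers API; a minimal sketch:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("bigcode/starcoderbase-1b")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 0 0
```
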
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:9fdaceae5e9a3dc21d255292490e8db4b008c4ff2ee0693c9edd914d566bd15e
size 4548859752

pytorch_model.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:d0d2747ca5eab6c037184ad171739f1eff537cefb7220acbf13347ff55946492
size 4548920521

special_tokens_map.json ADDED

{
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
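
Each of these special tokens is encoded as a single id by the tokenizer, which keeps FIM and metadata prompts compact; a quick sketch for verifying this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")
for token in ["<|endoftext|>", "<fim_prefix>", "<fim_middle>", "<fim_suffix>"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
```
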
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED

{
  "add_prefix_space": false,
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>",
  "vocab_size": 49152
}

vocab.json ADDED
The diff for this file is too large to render. See raw diff