TechxGenus committed
Commit a52c001
1 Parent(s): bb33fa2

Upload README.md

Files changed (1): README.md (+219 lines)

---
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.2
    top_p: 0.95
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
datasets:
- bigcode/the-stack-v2-train
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-15b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 48.1
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 33.8
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 65.1
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 37.8
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 46.3
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 74.08
---

# StarCoder2

<center>
<img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/starcoder2_banner.png" alt="SC2" width="900" height="600">
</center>

This repository provides a GPTQ-quantized version of the [starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) model.

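This card otherwise follows the base starcoder2-15b model card, so the usage examples further down load the original full-precision checkpoint. As a minimal sketch of loading the GPTQ weights instead, assuming the quantized checkpoint ships a standard `transformers`/`optimum` GPTQ configuration (the repository id below is a placeholder, not a real name):

```python
# pip install optimum auto-gptq   (assumed GPTQ backend for transformers)
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "<this-gptq-repo>"  # placeholder: replace with this repository's model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# the quantization config stored with the GPTQ checkpoint is picked up automatically
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
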
---

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

StarCoder2-15B is a 15B-parameter model trained on 600+ programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 4+ trillion tokens.
The model was trained with the [NVIDIA NeMo™ Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/) using the [NVIDIA Eos Supercomputer](https://blogs.nvidia.com/blog/eos/) built with [NVIDIA DGX H100](https://www.nvidia.com/en-us/data-center/dgx-h100/) systems.

- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** 600+ programming languages

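Because the model was trained with a Fill-in-the-Middle objective, it can also complete code between a given prefix and suffix. The sketch below is illustrative only and assumes the StarCoder-family sentinel tokens `<fim_prefix>`, `<fim_suffix>`, and `<fim_middle>`; check `tokenizer.special_tokens_map` (or the tokenizer config) before relying on them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)

# fill-in-the-middle prompt: ask the model for the code that belongs
# between the prefix and the suffix (sentinel token names are assumed)
prompt = "<fim_prefix>def fibonacci(n):\n    <fim_suffix>\n    return result<fim_middle>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```
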
## Use

### Intended use

The model was trained on GitHub code as well as additional selected data sources such as arXiv and Wikipedia. As such, it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well.

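In practice, prompts work better when phrased as code to be continued (a signature, comment, or docstring) rather than as a natural-language command. A tiny illustrative sketch (the prompt strings are only examples):

```python
# completion-style prompting: give the model code context to continue
good_prompt = 'def square_root(x: float) -> float:\n    """Compute the square root of x."""\n'

# instruction-style prompting tends to work poorly with this base model
bad_prompt = "Write a function that computes the square root."
```
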
### Generation

Here are some examples to get started with the model. You can find a script for fine-tuning in StarCoder2's [GitHub repository](https://github.com/bigcode-project/starcoder2).

First, make sure to install `transformers` from source:
```bash
pip install git+https://github.com/huggingface/transformers.git
```

#### Running the model on CPU/GPU/multi GPU

* _Using full precision_
```python
# pip install git+https://github.com/huggingface/transformers.git # TODO: merge PR to main
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 32251.33 MB
```

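The inference parameters in this card's metadata (`temperature: 0.2`, `top_p: 0.95`) can be passed to `generate` to sample instead of using the greedy default. A small sketch continuing from the example above, reusing `model`, `tokenizer`, and `inputs` (the `max_new_tokens` value is only illustrative):

```python
# sampling with the temperature/top_p suggested in the card metadata
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    max_new_tokens=128,  # illustrative value, not from the card
)
print(tokenizer.decode(outputs[0]))
```
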
#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# to use 4bit use `load_in_4bit=True` instead
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
# load_in_8bit
Memory footprint: 16900.18 MB
# load_in_4bit
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 9224.60 MB
```
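
For 4-bit loading, the comment in the snippet above already points at `load_in_4bit=True`; below is a slightly more explicit sketch with an NF4 configuration. The `bnb_4bit_*` options shown are common choices, not values prescribed by this card.

```python
# pip install bitsandbytes accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype used for matmuls
)

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```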

### Attribution & Other Requirements

The pretraining dataset of the model was filtered for permissive licenses and code with no license only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search-v2) that lets you search through the pretraining data to identify where the generated code came from, and to apply the proper attribution to your code.

# Limitations

The model has been trained on source code from 600+ programming languages. The predominant natural language in the sources is English, although other languages are also present. As such, the model can generate code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model's limitations.

# Training

## Model

- **Architecture:** Transformer decoder with grouped-query attention, sliding-window attention, and a Fill-in-the-Middle objective
- **Pretraining steps:** 1 million
- **Pretraining tokens:** 4+ trillion
- **Precision:** bfloat16

## Hardware

- **GPUs:** 1024 x H100

## Software

- **Framework:** [NeMo Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

# License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

# Citation

```bibtex
@misc{lozhkov2024starcoder,
      title={StarCoder 2 and The Stack v2: The Next Generation},
      author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
      year={2024},
      eprint={2402.19173},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```