TheBloke committed
Commit 629eb8f
1 Parent(s): f02d237

Update README.md

Files changed (1)
  1. README.md +181 -28
README.md CHANGED
@@ -1,6 +1,82 @@
  ---
  inference: false
- license: other
+ pipeline_tag: text-generation
+ license: bigcode-openrail-m
+ datasets:
+ - bigcode/the-stack-dedup
+ - tiiuae/falcon-refinedweb
+ metrics:
+ - code_eval
+ - mmlu
+ - arc
+ - hellaswag
+ - truthfulqa
+ library_name: transformers
+ tags:
+ - code
+ model-index:
+ - name: StarCoderPlus
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       type: openai_humaneval
+       name: HumanEval (Prompted)
+     metrics:
+     - name: pass@1
+       type: pass@1
+       value: 26.7
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: MMLU (5-shot)
+       name: MMLU
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 45.1
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: HellaSwag (10-shot)
+       name: HellaSwag
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 77.3
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: ARC (25-shot)
+       name: ARC
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 48.9
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: TruthfulQA (0-shot)
+       name: TruthfulQA
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 37.9
+       verified: false
+ extra_gated_prompt: >-
+   ## Model License Agreement
+
+   Please read the BigCode [OpenRAIL-M
+   license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
+   agreement before accepting it.
+
+ extra_gated_fields:
+   I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
+
  ---

  <!-- header start -->
 
@@ -21,12 +97,12 @@ license: other

  These files are GPTQ 4bit model files for [Bigcode's StarcoderPlus](https://huggingface.co/bigcode/starcoderplus).

- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
+ It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

  ## Repositories available

  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starcoderplus-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starcoderplus-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigcode/starcoderplus)

  ## How to easily download and use this model in text-generation-webui
 
@@ -58,7 +134,6 @@ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
  import argparse

  model_name_or_path = "TheBloke/starcoderplus-GPTQ"
- model_basename = "gptq_model-4bit--1g"

  use_triton = False
 
@@ -74,31 +149,19 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,

  print("\n\n*** Generate:")

- input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
- output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
- print(tokenizer.decode(output[0]))
-
- # Inference can also be done using transformers' pipeline
-
- # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
- logging.set_verbosity(logging.CRITICAL)
-
- prompt = "Tell me about AI"
- prompt_template=f'''### Human: {prompt}
- ### Assistant:'''
-
- print("*** Pipeline:")
- pipe = pipeline(
-     "text-generation",
-     model=model,
-     tokenizer=tokenizer,
-     max_new_tokens=512,
-     temperature=0.7,
-     top_p=0.95,
-     repetition_penalty=1.15
- )
-
- print(pipe(prompt_template)[0]['generated_text'])
+ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ ### Fill-in-the-middle
+ Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
+
+ ```python
+ input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
  ```

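The hunk above shows only the changed lines of the README's Python example. As a point of reference, a minimal self-contained sketch of loading this GPTQ repo with AutoGPTQ might look like the following; the `device`, `use_safetensors` and generation arguments are illustrative assumptions, not the exact code committed in the file:

```python
# Illustrative sketch only; not the exact example in this README.
# Assumes: pip install auto-gptq transformers
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/starcoderplus-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    device="cuda:0",       # assumed single-GPU setup
    use_triton=False,
    use_safetensors=True,  # assumption: the repo ships a .safetensors file
)

# StarCoder-style models are code completers, so prompt with code rather than instructions.
input_ids = tokenizer("def print_hello_world():", return_tensors="pt").input_ids.to("cuda:0")
outputs = model.generate(inputs=input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```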
  ## Provided files
 
@@ -145,4 +208,94 @@ Thank you to all my generous patrons and donaters!

  # Original model card: Bigcode's StarcoderPlus

- No original model card was provided.
+ # StarCoderPlus
+
+ Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [Use](#use)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
+ 5. [License](#license)
+ 6. [Citation](#citation)
+
+ ## Model Summary
+
+ StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+ combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
+ It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
+ [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
+
+ - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
+ - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
+ - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
+ - **Languages:** English & 80+ Programming languages
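The architecture claims in the summary above (multi-query attention, an 8192-token context) can be sanity-checked against the published config. A minimal sketch, assuming the `transformers` GPTBigCode implementation and its usual config field names:

```python
# Sketch: confirm the summary's architecture claims from the model config.
# Assumes transformers >= 4.28 (GPTBigCode support); field names may differ across versions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/starcoderplus")
print(config.model_type)   # expected: "gpt_bigcode"
print(config.multi_query)  # expected: True  -> Multi Query Attention
print(config.n_positions)  # expected: 8192  -> context window
```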
+
+
+ ## Use
+
+ ### Intended use
+
+ The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
+
+ **Feel free to share your generations in the Community tab!**
+
+ ### Generation
+ ```python
+ # pip install -q transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigcode/starcoderplus"
+ device = "cuda" # for GPU usage or "cpu" for CPU usage
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+
+ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
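One usage note on the snippet above: `generate()` without arguments falls back to the library defaults (greedy decoding with a short overall length cap in most `transformers` versions), so longer completions usually need an explicit token budget, for example:

```python
# Assumed continuation of the example above; the max_new_tokens value is arbitrary.
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```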
+
+ ### Fill-in-the-middle
+ Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
+
+ ```python
+ input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
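The decoded FIM output still contains the sentinel tokens, so the completed function usually has to be stitched back together. A small sketch of that post-processing, assuming the output keeps the `<fim_*>` markers and the StarCoder `<|endoftext|>` token (this helper is not part of the original card):

```python
# Sketch: reassemble prefix + generated middle + suffix from a FIM completion.
decoded = tokenizer.decode(outputs[0])
prefix, rest = decoded.split("<fim_suffix>", 1)
prefix = prefix.replace("<fim_prefix>", "")
suffix, middle = rest.split("<fim_middle>", 1)
middle = middle.replace("<|endoftext|>", "")
print(prefix + middle + suffix)
```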
+
+ ### Attribution & Other Requirements
+
+ The code dataset used to train the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
+
+ # Limitations
+
+ The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
+ Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
+
+ # Training
+ StarCoderPlus is StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
+
+ ## Model
+ - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
+ - **Finetuning steps:** 150k
+ - **Finetuning tokens:** 600B
+ - **Precision:** bfloat16
+
+ ## Hardware
+
+ - **GPUs:** 512 Tesla A100
+ - **Training time:** 14 days
+
+ ## Software
+
+ - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
+ - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
+ - **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
+
+ # License
+ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).